2026-02-28 00:00:09.957874 | Job console starting
2026-02-28 00:00:09.991586 | Updating git repos
2026-02-28 00:00:10.111336 | Cloning repos into workspace
2026-02-28 00:00:10.504808 | Restoring repo states
2026-02-28 00:00:10.551154 | Merging changes
2026-02-28 00:00:10.551198 | Checking out repos
2026-02-28 00:00:11.101866 | Preparing playbooks
2026-02-28 00:00:12.493554 | Running Ansible setup
2026-02-28 00:00:21.504389 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-28 00:00:23.438996 |
2026-02-28 00:00:23.440020 | PLAY [Base pre]
2026-02-28 00:00:23.466625 |
2026-02-28 00:00:23.466744 | TASK [Setup log path fact]
2026-02-28 00:00:23.496583 | orchestrator | ok
2026-02-28 00:00:23.529331 |
2026-02-28 00:00:23.529473 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-28 00:00:23.585499 | orchestrator | ok
2026-02-28 00:00:23.603217 |
2026-02-28 00:00:23.603319 | TASK [emit-job-header : Print job information]
2026-02-28 00:00:23.674682 | # Job Information
2026-02-28 00:00:23.674820 | Ansible Version: 2.16.14
2026-02-28 00:00:23.674871 | Job: testbed-deploy-next-in-a-nutshell-with-tempest-ubuntu-24.04
2026-02-28 00:00:23.674898 | Pipeline: periodic-midnight
2026-02-28 00:00:23.674917 | Executor: 521e9411259a
2026-02-28 00:00:23.674933 | Triggered by: https://github.com/osism/testbed
2026-02-28 00:00:23.674951 | Event ID: da5c57b108b34da5b60920ea2a4bd68a
2026-02-28 00:00:23.680292 |
2026-02-28 00:00:23.680368 | LOOP [emit-job-header : Print node information]
2026-02-28 00:00:23.790264 | orchestrator | ok:
2026-02-28 00:00:23.790446 | orchestrator | # Node Information
2026-02-28 00:00:23.790477 | orchestrator | Inventory Hostname: orchestrator
2026-02-28 00:00:23.790498 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-28 00:00:23.790515 | orchestrator | Username: zuul-testbed01
2026-02-28 00:00:23.790533 | orchestrator | Distro: Debian 12.13
2026-02-28 00:00:23.790551 | orchestrator | Provider: static-testbed
2026-02-28 00:00:23.790569 | orchestrator | Region:
2026-02-28 00:00:23.790585 | orchestrator | Label: testbed-orchestrator
2026-02-28 00:00:23.790601 | orchestrator | Product Name: OpenStack Nova
2026-02-28 00:00:23.790616 | orchestrator | Interface IP: 81.163.193.140
2026-02-28 00:00:23.810374 |
2026-02-28 00:00:23.810474 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-28 00:00:24.733160 | orchestrator -> localhost | changed
2026-02-28 00:00:24.745083 |
2026-02-28 00:00:24.745183 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-28 00:00:26.637577 | orchestrator -> localhost | changed
2026-02-28 00:00:26.650663 |
2026-02-28 00:00:26.650765 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-28 00:00:27.647684 | orchestrator -> localhost | ok
2026-02-28 00:00:27.657707 |
2026-02-28 00:00:27.657806 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-28 00:00:27.703356 | orchestrator | ok
2026-02-28 00:00:27.738451 | orchestrator | included: /var/lib/zuul/builds/4580c583255a4bbaa1e0ce291d0fa749/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-28 00:00:27.748595 |
2026-02-28 00:00:27.748898 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-28 00:00:31.650690 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-02-28 00:00:31.651789 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/4580c583255a4bbaa1e0ce291d0fa749/work/4580c583255a4bbaa1e0ce291d0fa749_id_rsa
2026-02-28 00:00:31.651843 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/4580c583255a4bbaa1e0ce291d0fa749/work/4580c583255a4bbaa1e0ce291d0fa749_id_rsa.pub
2026-02-28 00:00:31.652254 | orchestrator -> localhost | The key fingerprint is:
2026-02-28 00:00:31.652288 | orchestrator -> localhost | SHA256:TGDmEN2RyBIf3dvtFK3nlafo9EH1zrcCL4gB8yauWn4 zuul-build-sshkey
2026-02-28 00:00:31.652309 | orchestrator -> localhost | The key's randomart image is:
2026-02-28 00:00:31.652337 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-28 00:00:31.652356 | orchestrator -> localhost | | +==+.+ . |
2026-02-28 00:00:31.652374 | orchestrator -> localhost | | .*+o+ . . o|
2026-02-28 00:00:31.652391 | orchestrator -> localhost | | .o . o . +o|
2026-02-28 00:00:31.652407 | orchestrator -> localhost | | o o . . =.=|
2026-02-28 00:00:31.652423 | orchestrator -> localhost | | + S = *o|
2026-02-28 00:00:31.652677 | orchestrator -> localhost | | . + .o + =|
2026-02-28 00:00:31.652707 | orchestrator -> localhost | | .. o o .oo. .o|
2026-02-28 00:00:31.652726 | orchestrator -> localhost | | o E . . ..o.. |
2026-02-28 00:00:31.652745 | orchestrator -> localhost | | ..oo . . |
2026-02-28 00:00:31.652762 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-28 00:00:31.652810 | orchestrator -> localhost | ok: Runtime: 0:00:02.313684
2026-02-28 00:00:31.665444 |
2026-02-28 00:00:31.666056 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-28 00:00:31.728185 | orchestrator | ok
2026-02-28 00:00:31.790324 | orchestrator | included: /var/lib/zuul/builds/4580c583255a4bbaa1e0ce291d0fa749/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-28 00:00:31.851355 |
2026-02-28 00:00:31.851463 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-28 00:00:31.932134 | orchestrator | skipping: Conditional result was False
2026-02-28 00:00:31.940436 |
2026-02-28 00:00:31.940549 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-28 00:00:32.652400 | orchestrator | changed
2026-02-28 00:00:32.699803 |
2026-02-28 00:00:32.699919 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-28 00:00:33.000087 | orchestrator | ok
2026-02-28 00:00:33.009196 |
2026-02-28 00:00:33.009319 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-28 00:00:33.635552 | orchestrator | ok
2026-02-28 00:00:33.641484 |
2026-02-28 00:00:33.641588 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-28 00:00:34.223274 | orchestrator | ok
2026-02-28 00:00:34.243677 |
2026-02-28 00:00:34.243793 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-28 00:00:34.299048 | orchestrator | skipping: Conditional result was False
2026-02-28 00:00:34.305732 |
2026-02-28 00:00:34.310991 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-28 00:00:35.125645 | orchestrator -> localhost | changed
2026-02-28 00:00:35.138752 |
2026-02-28 00:00:35.140659 | TASK [add-build-sshkey : Add back temp key]
2026-02-28 00:00:35.907291 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/4580c583255a4bbaa1e0ce291d0fa749/work/4580c583255a4bbaa1e0ce291d0fa749_id_rsa (zuul-build-sshkey)
2026-02-28 00:00:35.907480 | orchestrator -> localhost | ok: Runtime: 0:00:00.046305
2026-02-28 00:00:35.916007 |
2026-02-28 00:00:35.916106 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-28 00:00:36.573192 | orchestrator | ok
2026-02-28 00:00:36.603536 |
2026-02-28 00:00:36.603668 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-28 00:00:36.651809 | orchestrator | skipping: Conditional result was False
2026-02-28 00:00:36.957039 |
2026-02-28 00:00:36.957157 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-28 00:00:37.596390 | orchestrator | ok
2026-02-28 00:00:37.620697 |
2026-02-28 00:00:37.620817 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-28 00:00:37.753457 | orchestrator | ok
2026-02-28 00:00:37.784800 |
2026-02-28 00:00:37.784918 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-28 00:00:39.782002 | orchestrator -> localhost | ok
2026-02-28 00:00:39.796077 |
2026-02-28 00:00:39.796167 | TASK [validate-host : Collect information about the host]
2026-02-28 00:00:42.800207 | orchestrator | ok
2026-02-28 00:00:42.846815 |
2026-02-28 00:00:42.846943 | TASK [validate-host : Sanitize hostname]
2026-02-28 00:00:43.030813 | orchestrator | ok
2026-02-28 00:00:43.036645 |
2026-02-28 00:00:43.036735 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-28 00:00:45.651716 | orchestrator -> localhost | changed
2026-02-28 00:00:45.656848 |
2026-02-28 00:00:45.656923 | TASK [validate-host : Collect information about zuul worker]
2026-02-28 00:00:46.655282 | orchestrator | ok
2026-02-28 00:00:46.660535 |
2026-02-28 00:00:46.660623 | TASK [validate-host : Write out all zuul information for each host]
2026-02-28 00:00:49.283788 | orchestrator -> localhost | changed
2026-02-28 00:00:49.306169 |
2026-02-28 00:00:49.306276 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-28 00:00:49.680778 | orchestrator | ok
2026-02-28 00:00:49.688088 |
2026-02-28 00:00:49.688176 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-28 00:01:56.096316 | orchestrator | changed:
2026-02-28 00:01:56.097576 | orchestrator | .d..t...... src/
2026-02-28 00:01:56.097653 | orchestrator | .d..t...... src/github.com/
2026-02-28 00:01:56.097687 | orchestrator | .d..t...... src/github.com/osism/
2026-02-28 00:01:56.097715 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-28 00:01:56.097742 | orchestrator | RedHat.yml
2026-02-28 00:01:56.129443 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-28 00:01:56.129468 | orchestrator | RedHat.yml
2026-02-28 00:01:56.129543 | orchestrator | = 1.53.0"...
2026-02-28 00:02:07.154578 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-02-28 00:02:07.591903 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-28 00:02:08.831010 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-28 00:02:09.113157 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-28 00:02:09.994855 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-28 00:02:10.382857 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-02-28 00:02:11.019979 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-02-28 00:02:11.020109 | orchestrator |
2026-02-28 00:02:11.020138 | orchestrator | Providers are signed by their developers.
2026-02-28 00:02:11.020163 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-28 00:02:11.020182 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-28 00:02:11.020207 | orchestrator |
2026-02-28 00:02:11.020226 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-28 00:02:11.020244 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-28 00:02:11.020284 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-28 00:02:11.020323 | orchestrator | you run "tofu init" in the future.
2026-02-28 00:02:11.020358 | orchestrator |
2026-02-28 00:02:11.020376 | orchestrator | OpenTofu has been successfully initialized!
2026-02-28 00:02:11.020392 | orchestrator |
2026-02-28 00:02:11.020410 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-28 00:02:11.020427 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-28 00:02:11.020443 | orchestrator | should now work.
2026-02-28 00:02:11.020460 | orchestrator |
2026-02-28 00:02:11.020478 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-28 00:02:11.020496 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-28 00:02:11.020514 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-28 00:02:11.196717 | orchestrator | Created and switched to workspace "ci"!
2026-02-28 00:02:11.196842 | orchestrator |
2026-02-28 00:02:11.196860 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-28 00:02:11.196874 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-28 00:02:11.196885 | orchestrator | for this configuration.
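The provider installation above is driven by a `required_providers` block in the testbed's OpenTofu configuration. A minimal sketch of what such a block could look like, reconstructed only from the "Finding/Installing" lines in this log (the exact constraints and the attribution of the truncated `>= 1.53.0` fragment to the openstack provider are assumptions, not taken from the testbed repository):

```hcl
terraform {
  required_providers {
    # Hypothetical reconstruction from the provider-install log lines above.
    null = {
      source = "hashicorp/null" # resolved to v3.2.4 in this run
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # matches 'Finding hashicorp/local versions matching ">= 2.2.0"'
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # assumed from the truncated '= 1.53.0"...' fragment
    }
  }
}
```

`tofu init` records the concrete versions it selects (v3.2.4, v3.4.0, v2.7.0 here) in `.terraform.lock.hcl`, which is why the log recommends committing that file.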
2026-02-28 00:02:11.317036 | orchestrator | ci.auto.tfvars
2026-02-28 00:02:11.462060 | orchestrator | default_custom.tf
2026-02-28 00:02:15.756909 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-28 00:02:16.295079 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-28 00:02:16.495868 | orchestrator |
2026-02-28 00:02:16.495928 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-28 00:02:16.495936 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-28 00:02:16.495969 | orchestrator | + create
2026-02-28 00:02:16.495993 | orchestrator | <= read (data resources)
2026-02-28 00:02:16.496006 | orchestrator |
2026-02-28 00:02:16.496011 | orchestrator | OpenTofu will perform the following actions:
2026-02-28 00:02:16.496145 | orchestrator |
2026-02-28 00:02:16.496159 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-02-28 00:02:16.496164 | orchestrator | # (config refers to values not yet known)
2026-02-28 00:02:16.496168 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-02-28 00:02:16.496173 | orchestrator | + checksum = (known after apply)
2026-02-28 00:02:16.496177 | orchestrator | + created_at = (known after apply)
2026-02-28 00:02:16.496181 | orchestrator | + file = (known after apply)
2026-02-28 00:02:16.496185 | orchestrator | + id = (known after apply)
2026-02-28 00:02:16.496208 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:16.496212 | orchestrator | + min_disk_gb = (known after apply)
2026-02-28 00:02:16.496216 | orchestrator | + min_ram_mb = (known after apply)
2026-02-28 00:02:16.496220 | orchestrator | + most_recent = true
2026-02-28 00:02:16.496225 | orchestrator | + name = (known after apply)
2026-02-28 00:02:16.496229 | orchestrator | + protected = (known after apply)
2026-02-28 00:02:16.496233 | orchestrator | + region = (known after apply)
2026-02-28 00:02:16.496239 | orchestrator | + schema = (known after apply)
2026-02-28 00:02:16.496243 | orchestrator | + size_bytes = (known after apply)
2026-02-28 00:02:16.496247 | orchestrator | + tags = (known after apply)
2026-02-28 00:02:16.496251 | orchestrator | + updated_at = (known after apply)
2026-02-28 00:02:16.496254 | orchestrator | }
2026-02-28 00:02:16.496378 | orchestrator |
2026-02-28 00:02:16.496390 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-02-28 00:02:16.496395 | orchestrator | # (config refers to values not yet known)
2026-02-28 00:02:16.496399 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-02-28 00:02:16.496403 | orchestrator | + checksum = (known after apply)
2026-02-28 00:02:16.496407 | orchestrator | + created_at = (known after apply)
2026-02-28 00:02:16.496411 | orchestrator | + file = (known after apply)
2026-02-28 00:02:16.496415 | orchestrator | + id = (known after apply)
2026-02-28 00:02:16.496419 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:16.496423 | orchestrator | + min_disk_gb = (known after apply)
2026-02-28 00:02:16.496427 | orchestrator | + min_ram_mb = (known after apply)
2026-02-28 00:02:16.496431 | orchestrator | + most_recent = true
2026-02-28 00:02:16.496435 | orchestrator | + name = (known after apply)
2026-02-28 00:02:16.496439 | orchestrator | + protected = (known after apply)
2026-02-28 00:02:16.496442 | orchestrator | + region = (known after apply)
2026-02-28 00:02:16.496446 | orchestrator | + schema = (known after apply)
2026-02-28 00:02:16.496450 | orchestrator | + size_bytes = (known after apply)
2026-02-28 00:02:16.496454 | orchestrator | + tags = (known after apply)
2026-02-28 00:02:16.496458 | orchestrator | + updated_at = (known after apply)
2026-02-28 00:02:16.496461 | orchestrator | }
2026-02-28 00:02:16.496571 | orchestrator |
2026-02-28 00:02:16.496583 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-02-28 00:02:16.496587 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-02-28 00:02:16.496592 | orchestrator | + content = (known after apply)
2026-02-28 00:02:16.496596 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-28 00:02:16.496599 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-28 00:02:16.496603 | orchestrator | + content_md5 = (known after apply)
2026-02-28 00:02:16.496607 | orchestrator | + content_sha1 = (known after apply)
2026-02-28 00:02:16.496611 | orchestrator | + content_sha256 = (known after apply)
2026-02-28 00:02:16.496615 | orchestrator | + content_sha512 = (known after apply)
2026-02-28 00:02:16.496619 | orchestrator | + directory_permission = "0777"
2026-02-28 00:02:16.496622 | orchestrator | + file_permission = "0644"
2026-02-28 00:02:16.496626 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-02-28 00:02:16.496630 | orchestrator | + id = (known after apply)
2026-02-28 00:02:16.496634 | orchestrator | }
2026-02-28 00:02:16.496733 | orchestrator |
2026-02-28 00:02:16.496745 | orchestrator | # local_file.id_rsa_pub will be created
2026-02-28 00:02:16.496750 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-02-28 00:02:16.496754 | orchestrator | + content = (known after apply)
2026-02-28 00:02:16.496758 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-28 00:02:16.496762 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-28 00:02:16.496765 | orchestrator | + content_md5 = (known after apply)
2026-02-28 00:02:16.496769 | orchestrator | + content_sha1 = (known after apply)
2026-02-28 00:02:16.496773 | orchestrator | + content_sha256 = (known after apply)
2026-02-28 00:02:16.496777 | orchestrator | + content_sha512 = (known after apply)
2026-02-28 00:02:16.496781 | orchestrator | + directory_permission = "0777"
2026-02-28 00:02:16.496785 | orchestrator | + file_permission = "0644"
2026-02-28 00:02:16.496813 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-02-28 00:02:16.496817 | orchestrator | + id = (known after apply)
2026-02-28 00:02:16.496821 | orchestrator | }
2026-02-28 00:02:16.496917 | orchestrator |
2026-02-28 00:02:16.496941 | orchestrator | # local_file.inventory will be created
2026-02-28 00:02:16.496946 | orchestrator | + resource "local_file" "inventory" {
2026-02-28 00:02:16.496950 | orchestrator | + content = (known after apply)
2026-02-28 00:02:16.496954 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-28 00:02:16.496958 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-28 00:02:16.496962 | orchestrator | + content_md5 = (known after apply)
2026-02-28 00:02:16.496965 | orchestrator | + content_sha1 = (known after apply)
2026-02-28 00:02:16.496969 | orchestrator | + content_sha256 = (known after apply)
2026-02-28 00:02:16.496973 | orchestrator | + content_sha512 = (known after apply)
2026-02-28 00:02:16.496977 | orchestrator | + directory_permission = "0777"
2026-02-28 00:02:16.496981 | orchestrator | + file_permission = "0644"
2026-02-28 00:02:16.496985 | orchestrator | + filename = "inventory.ci"
2026-02-28 00:02:16.496989 | orchestrator | + id = (known after apply)
2026-02-28 00:02:16.496993 | orchestrator | }
2026-02-28 00:02:16.497086 | orchestrator |
2026-02-28 00:02:16.497097 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-02-28 00:02:16.497102 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-02-28 00:02:16.497106 | orchestrator | + content = (sensitive value)
2026-02-28 00:02:16.497110 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-28 00:02:16.497113 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-28 00:02:16.497117 | orchestrator | + content_md5 = (known after apply)
2026-02-28 00:02:16.497121 | orchestrator | + content_sha1 = (known after apply)
2026-02-28 00:02:16.497125 | orchestrator | + content_sha256 = (known after apply)
2026-02-28 00:02:16.497129 | orchestrator | + content_sha512 = (known after apply)
2026-02-28 00:02:16.497132 | orchestrator | + directory_permission = "0700"
2026-02-28 00:02:16.497136 | orchestrator | + file_permission = "0600"
2026-02-28 00:02:16.497140 | orchestrator | + filename = ".id_rsa.ci"
2026-02-28 00:02:16.497144 | orchestrator | + id = (known after apply)
2026-02-28 00:02:16.497148 | orchestrator | }
2026-02-28 00:02:16.497173 | orchestrator |
2026-02-28 00:02:16.497184 | orchestrator | # null_resource.node_semaphore will be created
2026-02-28 00:02:16.497188 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-02-28 00:02:16.497192 | orchestrator | + id = (known after apply)
2026-02-28 00:02:16.497196 | orchestrator | }
2026-02-28 00:02:16.497285 | orchestrator |
2026-02-28 00:02:16.497296 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-28 00:02:16.497301 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-28 00:02:16.497305 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:16.497309 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:16.497313 | orchestrator | + id = (known after apply)
2026-02-28 00:02:16.497316 | orchestrator | + image_id = (known after apply)
2026-02-28 00:02:16.497320 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:16.497324 | orchestrator | + name = "testbed-volume-manager-base"
2026-02-28 00:02:16.497328 | orchestrator | + region = (known after apply)
2026-02-28 00:02:16.497332 | orchestrator | + size = 80
2026-02-28 00:02:16.497336 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:16.497339 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:16.497343 | orchestrator | }
2026-02-28 00:02:16.497429 | orchestrator |
2026-02-28 00:02:16.497440 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-28 00:02:16.497445 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-28 00:02:16.497449 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:16.497452 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:16.497457 | orchestrator | + id = (known after apply)
2026-02-28 00:02:16.497464 | orchestrator | + image_id = (known after apply)
2026-02-28 00:02:16.497468 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:16.497472 | orchestrator | + name = "testbed-volume-0-node-base"
2026-02-28 00:02:16.497476 | orchestrator | + region = (known after apply)
2026-02-28 00:02:16.497480 | orchestrator | + size = 80
2026-02-28 00:02:16.497484 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:16.497488 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:16.497491 | orchestrator | }
2026-02-28 00:02:16.497577 | orchestrator |
2026-02-28 00:02:16.497588 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-28 00:02:16.497592 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-28 00:02:16.497596 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:16.497600 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:16.497604 | orchestrator | + id = (known after apply)
2026-02-28 00:02:16.497608 | orchestrator | + image_id = (known after apply)
2026-02-28 00:02:16.497612 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:16.497615 | orchestrator | + name = "testbed-volume-1-node-base"
2026-02-28 00:02:16.497619 | orchestrator | + region = (known after apply)
2026-02-28 00:02:16.497623 | orchestrator | + size = 80
2026-02-28 00:02:16.497627 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:16.497631 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:16.497634 | orchestrator | }
2026-02-28 00:02:16.497718 | orchestrator |
2026-02-28 00:02:16.497729 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-28 00:02:16.497734 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-28 00:02:16.497738 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:16.497741 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:16.497745 | orchestrator | + id = (known after apply)
2026-02-28 00:02:16.497749 | orchestrator | + image_id = (known after apply)
2026-02-28 00:02:16.497753 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:16.497757 | orchestrator | + name = "testbed-volume-2-node-base"
2026-02-28 00:02:16.497760 | orchestrator | + region = (known after apply)
2026-02-28 00:02:16.497764 | orchestrator | + size = 80
2026-02-28 00:02:16.497768 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:16.497772 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:16.497776 | orchestrator | }
2026-02-28 00:02:16.497902 | orchestrator |
2026-02-28 00:02:16.497915 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-28 00:02:16.497920 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-28 00:02:16.497924 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:16.497927 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:16.497931 | orchestrator | + id = (known after apply)
2026-02-28 00:02:16.497935 | orchestrator | + image_id = (known after apply)
2026-02-28 00:02:16.497939 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:16.497947 | orchestrator | + name = "testbed-volume-3-node-base"
2026-02-28 00:02:16.497951 | orchestrator | + region = (known after apply)
2026-02-28 00:02:16.497955 | orchestrator | + size = 80
2026-02-28 00:02:16.497959 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:16.497963 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:16.497967 | orchestrator | }
2026-02-28 00:02:16.498069 | orchestrator |
2026-02-28 00:02:16.498081 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-28 00:02:16.498086 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-28 00:02:16.498090 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:16.498094 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:16.498098 | orchestrator | + id = (known after apply)
2026-02-28 00:02:16.498106 | orchestrator | + image_id = (known after apply)
2026-02-28 00:02:16.498110 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:16.498113 | orchestrator | + name = "testbed-volume-4-node-base"
2026-02-28 00:02:16.498117 | orchestrator | + region = (known after apply)
2026-02-28 00:02:16.498121 | orchestrator | + size = 80
2026-02-28 00:02:16.498125 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:16.498128 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:16.498132 | orchestrator | }
2026-02-28 00:02:16.498215 | orchestrator |
2026-02-28 00:02:16.498226 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-28 00:02:16.498231 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-28 00:02:16.498234 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:16.498238 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:16.498242 | orchestrator | + id = (known after apply)
2026-02-28 00:02:16.498246 | orchestrator | + image_id = (known after apply)
2026-02-28 00:02:16.498250 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:16.498254 | orchestrator | + name = "testbed-volume-5-node-base"
2026-02-28 00:02:16.498257 | orchestrator | + region = (known after apply)
2026-02-28 00:02:16.498261 | orchestrator | + size = 80
2026-02-28 00:02:16.498265 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:16.498269 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:16.498273 | orchestrator | }
2026-02-28 00:02:16.498358 | orchestrator |
2026-02-28 00:02:16.498369 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-02-28 00:02:16.498374 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:16.498377 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:16.498381 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:16.498385 | orchestrator | + id = (known after apply)
2026-02-28 00:02:16.498389 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:16.498393 | orchestrator | + name = "testbed-volume-0-node-3"
2026-02-28 00:02:16.498397 | orchestrator | + region = (known after apply)
2026-02-28 00:02:16.498400 | orchestrator | + size = 20
2026-02-28 00:02:16.498404 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:16.498408 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:16.498412 | orchestrator | }
2026-02-28 00:02:16.498504 | orchestrator |
2026-02-28 00:02:16.498516 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-02-28 00:02:16.498521 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:16.498525 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:16.498528 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:16.498532 | orchestrator | + id = (known after apply)
2026-02-28 00:02:16.498536 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:16.498540 | orchestrator | + name = "testbed-volume-1-node-4"
2026-02-28 00:02:16.498544 | orchestrator | + region = (known after apply)
2026-02-28 00:02:16.498547 | orchestrator | + size = 20
2026-02-28 00:02:16.498551 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:16.498555 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:16.498559 | orchestrator | }
2026-02-28 00:02:16.498637 | orchestrator |
2026-02-28 00:02:16.498648 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-02-28 00:02:16.498653 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:16.498657 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:16.498661 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:16.498664 | orchestrator | + id = (known after apply)
2026-02-28 00:02:16.498668 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:16.498672 | orchestrator | + name = "testbed-volume-2-node-5"
2026-02-28 00:02:16.498676 | orchestrator | + region = (known after apply)
2026-02-28 00:02:16.498686 | orchestrator | + size = 20
2026-02-28 00:02:16.498690 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:16.498693 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:16.498697 | orchestrator | }
2026-02-28 00:02:16.498776 | orchestrator |
2026-02-28 00:02:16.498788 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-02-28 00:02:16.498811 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:16.498815 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:16.498818 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:16.498822 | orchestrator | + id = (known after apply)
2026-02-28 00:02:16.498826 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:16.498830 | orchestrator | + name = "testbed-volume-3-node-3"
2026-02-28 00:02:16.498834 | orchestrator | + region = (known after apply)
2026-02-28 00:02:16.498838 | orchestrator | + size = 20
2026-02-28 00:02:16.498841 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:16.498845 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:16.498849 | orchestrator | }
2026-02-28 00:02:16.498929 | orchestrator |
2026-02-28 00:02:16.498940 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-02-28 00:02:16.498945 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:16.498948 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:16.498952 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:16.498956 | orchestrator | + id = (known after apply)
2026-02-28 00:02:16.498960 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:16.498964 | orchestrator | + name = "testbed-volume-4-node-4"
2026-02-28 00:02:16.498968 | orchestrator | + region = (known after apply)
2026-02-28 00:02:16.498975 | orchestrator | + size = 20
2026-02-28 00:02:16.498979 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:16.498983 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:16.498987 | orchestrator | }
2026-02-28 00:02:16.499069 | orchestrator |
2026-02-28 00:02:16.499080 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-02-28 00:02:16.499084 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:16.499088 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:16.499092 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:16.499096 | orchestrator | + id = (known after apply)
2026-02-28 00:02:16.499100 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:16.499104 | orchestrator | + name = "testbed-volume-5-node-5"
2026-02-28 00:02:16.499107 | orchestrator | + region = (known after apply)
2026-02-28 00:02:16.499111 | orchestrator | + size = 20
2026-02-28 00:02:16.499115 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:16.499119 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:16.499122 | orchestrator | }
2026-02-28 00:02:16.499206 | orchestrator |
2026-02-28 00:02:16.499217 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-02-28 00:02:16.499221 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:16.499225 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:16.499229 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:16.499233 | orchestrator | + id = (known after apply)
2026-02-28 00:02:16.499236 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:16.499240 | orchestrator | + name = "testbed-volume-6-node-3"
2026-02-28 00:02:16.499244 | orchestrator | + region = (known after apply)
2026-02-28 00:02:16.499248 | orchestrator | + size = 20
2026-02-28 00:02:16.499252 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:16.499255 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:16.499259 | orchestrator | }
2026-02-28 00:02:16.499337 | orchestrator |
2026-02-28 00:02:16.499348 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-02-28 00:02:16.499352 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:16.499361 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:16.499365 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:16.499369 | orchestrator | + id = (known after apply)
2026-02-28 00:02:16.499372 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:16.499376 | orchestrator | + name = "testbed-volume-7-node-4"
2026-02-28 00:02:16.499380 | orchestrator | + region = (known after apply)
2026-02-28 00:02:16.499384 | orchestrator | + size = 20
2026-02-28 00:02:16.499388 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:16.499391 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:16.499395 | orchestrator | }
2026-02-28 00:02:16.499471 | orchestrator |
2026-02-28 00:02:16.499482 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-02-28 00:02:16.499486 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-28 00:02:16.499490 | orchestrator | + attachment = (known after apply) 2026-02-28 00:02:16.499494 | orchestrator | + availability_zone = "nova" 2026-02-28 00:02:16.499498 | orchestrator | + id = (known after apply) 2026-02-28 00:02:16.499501 | orchestrator | + metadata = (known after apply) 2026-02-28 00:02:16.499505 | orchestrator | + name = "testbed-volume-8-node-5" 2026-02-28 00:02:16.499509 | orchestrator | + region = (known after apply) 2026-02-28 00:02:16.499513 | orchestrator | + size = 20 2026-02-28 00:02:16.499517 | orchestrator | + volume_retype_policy = "never" 2026-02-28 00:02:16.499521 | orchestrator | + volume_type = "ssd" 2026-02-28 00:02:16.499524 | orchestrator | } 2026-02-28 00:02:16.499965 | orchestrator | 2026-02-28 00:02:16.499998 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-02-28 00:02:16.500003 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-02-28 00:02:16.500007 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-28 00:02:16.500011 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-28 00:02:16.500015 | orchestrator | + all_metadata = (known after apply) 2026-02-28 00:02:16.500019 | orchestrator | + all_tags = (known after apply) 2026-02-28 00:02:16.500023 | orchestrator | + availability_zone = "nova" 2026-02-28 00:02:16.500027 | orchestrator | + config_drive = true 2026-02-28 00:02:16.500031 | orchestrator | + created = (known after apply) 2026-02-28 00:02:16.500035 | orchestrator | + flavor_id = (known after apply) 2026-02-28 00:02:16.500038 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-02-28 00:02:16.500042 | orchestrator | + force_delete = false 2026-02-28 00:02:16.500046 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-28 00:02:16.500050 | 
orchestrator | + id = (known after apply) 2026-02-28 00:02:16.500054 | orchestrator | + image_id = (known after apply) 2026-02-28 00:02:16.500057 | orchestrator | + image_name = (known after apply) 2026-02-28 00:02:16.500061 | orchestrator | + key_pair = "testbed" 2026-02-28 00:02:16.500065 | orchestrator | + name = "testbed-manager" 2026-02-28 00:02:16.500069 | orchestrator | + power_state = "active" 2026-02-28 00:02:16.500073 | orchestrator | + region = (known after apply) 2026-02-28 00:02:16.500077 | orchestrator | + security_groups = (known after apply) 2026-02-28 00:02:16.500080 | orchestrator | + stop_before_destroy = false 2026-02-28 00:02:16.500084 | orchestrator | + updated = (known after apply) 2026-02-28 00:02:16.500088 | orchestrator | + user_data = (sensitive value) 2026-02-28 00:02:16.500092 | orchestrator | 2026-02-28 00:02:16.500096 | orchestrator | + block_device { 2026-02-28 00:02:16.500100 | orchestrator | + boot_index = 0 2026-02-28 00:02:16.500104 | orchestrator | + delete_on_termination = false 2026-02-28 00:02:16.500112 | orchestrator | + destination_type = "volume" 2026-02-28 00:02:16.500116 | orchestrator | + multiattach = false 2026-02-28 00:02:16.500120 | orchestrator | + source_type = "volume" 2026-02-28 00:02:16.500124 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:16.500134 | orchestrator | } 2026-02-28 00:02:16.500138 | orchestrator | 2026-02-28 00:02:16.500142 | orchestrator | + network { 2026-02-28 00:02:16.500146 | orchestrator | + access_network = false 2026-02-28 00:02:16.500149 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-28 00:02:16.500153 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-28 00:02:16.500157 | orchestrator | + mac = (known after apply) 2026-02-28 00:02:16.500161 | orchestrator | + name = (known after apply) 2026-02-28 00:02:16.500165 | orchestrator | + port = (known after apply) 2026-02-28 00:02:16.500169 | orchestrator | + uuid = (known after apply) 2026-02-28 
00:02:16.500172 | orchestrator | } 2026-02-28 00:02:16.500176 | orchestrator | } 2026-02-28 00:02:16.500434 | orchestrator | 2026-02-28 00:02:16.500445 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-02-28 00:02:16.500450 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-28 00:02:16.500454 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-28 00:02:16.500458 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-28 00:02:16.500462 | orchestrator | + all_metadata = (known after apply) 2026-02-28 00:02:16.500465 | orchestrator | + all_tags = (known after apply) 2026-02-28 00:02:16.500469 | orchestrator | + availability_zone = "nova" 2026-02-28 00:02:16.500473 | orchestrator | + config_drive = true 2026-02-28 00:02:16.500477 | orchestrator | + created = (known after apply) 2026-02-28 00:02:16.500480 | orchestrator | + flavor_id = (known after apply) 2026-02-28 00:02:16.500484 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-28 00:02:16.500488 | orchestrator | + force_delete = false 2026-02-28 00:02:16.500492 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-28 00:02:16.500496 | orchestrator | + id = (known after apply) 2026-02-28 00:02:16.500500 | orchestrator | + image_id = (known after apply) 2026-02-28 00:02:16.500503 | orchestrator | + image_name = (known after apply) 2026-02-28 00:02:16.500507 | orchestrator | + key_pair = "testbed" 2026-02-28 00:02:16.500511 | orchestrator | + name = "testbed-node-0" 2026-02-28 00:02:16.500515 | orchestrator | + power_state = "active" 2026-02-28 00:02:16.500519 | orchestrator | + region = (known after apply) 2026-02-28 00:02:16.500522 | orchestrator | + security_groups = (known after apply) 2026-02-28 00:02:16.500526 | orchestrator | + stop_before_destroy = false 2026-02-28 00:02:16.500530 | orchestrator | + updated = (known after apply) 2026-02-28 00:02:16.500534 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-28 00:02:16.500538 | orchestrator | 2026-02-28 00:02:16.500542 | orchestrator | + block_device { 2026-02-28 00:02:16.500545 | orchestrator | + boot_index = 0 2026-02-28 00:02:16.500549 | orchestrator | + delete_on_termination = false 2026-02-28 00:02:16.500553 | orchestrator | + destination_type = "volume" 2026-02-28 00:02:16.500557 | orchestrator | + multiattach = false 2026-02-28 00:02:16.500561 | orchestrator | + source_type = "volume" 2026-02-28 00:02:16.500564 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:16.500568 | orchestrator | } 2026-02-28 00:02:16.500572 | orchestrator | 2026-02-28 00:02:16.500576 | orchestrator | + network { 2026-02-28 00:02:16.500580 | orchestrator | + access_network = false 2026-02-28 00:02:16.500584 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-28 00:02:16.500587 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-28 00:02:16.500591 | orchestrator | + mac = (known after apply) 2026-02-28 00:02:16.500595 | orchestrator | + name = (known after apply) 2026-02-28 00:02:16.500599 | orchestrator | + port = (known after apply) 2026-02-28 00:02:16.500603 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:16.500607 | orchestrator | } 2026-02-28 00:02:16.500611 | orchestrator | } 2026-02-28 00:02:16.500893 | orchestrator | 2026-02-28 00:02:16.500907 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-02-28 00:02:16.500911 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-28 00:02:16.500915 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-28 00:02:16.500924 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-28 00:02:16.500927 | orchestrator | + all_metadata = (known after apply) 2026-02-28 00:02:16.500931 | orchestrator | + all_tags = (known after apply) 2026-02-28 00:02:16.500935 | orchestrator | + availability_zone = "nova" 2026-02-28 00:02:16.500939 
| orchestrator | + config_drive = true 2026-02-28 00:02:16.500943 | orchestrator | + created = (known after apply) 2026-02-28 00:02:16.500946 | orchestrator | + flavor_id = (known after apply) 2026-02-28 00:02:16.500950 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-28 00:02:16.500954 | orchestrator | + force_delete = false 2026-02-28 00:02:16.500958 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-28 00:02:16.500962 | orchestrator | + id = (known after apply) 2026-02-28 00:02:16.500965 | orchestrator | + image_id = (known after apply) 2026-02-28 00:02:16.500969 | orchestrator | + image_name = (known after apply) 2026-02-28 00:02:16.500973 | orchestrator | + key_pair = "testbed" 2026-02-28 00:02:16.500977 | orchestrator | + name = "testbed-node-1" 2026-02-28 00:02:16.500980 | orchestrator | + power_state = "active" 2026-02-28 00:02:16.500984 | orchestrator | + region = (known after apply) 2026-02-28 00:02:16.500988 | orchestrator | + security_groups = (known after apply) 2026-02-28 00:02:16.500992 | orchestrator | + stop_before_destroy = false 2026-02-28 00:02:16.500996 | orchestrator | + updated = (known after apply) 2026-02-28 00:02:16.500999 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-28 00:02:16.501003 | orchestrator | 2026-02-28 00:02:16.501007 | orchestrator | + block_device { 2026-02-28 00:02:16.501011 | orchestrator | + boot_index = 0 2026-02-28 00:02:16.501015 | orchestrator | + delete_on_termination = false 2026-02-28 00:02:16.501019 | orchestrator | + destination_type = "volume" 2026-02-28 00:02:16.501022 | orchestrator | + multiattach = false 2026-02-28 00:02:16.501026 | orchestrator | + source_type = "volume" 2026-02-28 00:02:16.501030 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:16.501034 | orchestrator | } 2026-02-28 00:02:16.501038 | orchestrator | 2026-02-28 00:02:16.501042 | orchestrator | + network { 2026-02-28 00:02:16.501045 | orchestrator | + access_network = 
false 2026-02-28 00:02:16.501049 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-28 00:02:16.501053 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-28 00:02:16.501057 | orchestrator | + mac = (known after apply) 2026-02-28 00:02:16.501060 | orchestrator | + name = (known after apply) 2026-02-28 00:02:16.501064 | orchestrator | + port = (known after apply) 2026-02-28 00:02:16.501068 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:16.501072 | orchestrator | } 2026-02-28 00:02:16.501076 | orchestrator | } 2026-02-28 00:02:16.501493 | orchestrator | 2026-02-28 00:02:16.501631 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-02-28 00:02:16.501637 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-28 00:02:16.501641 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-28 00:02:16.501645 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-28 00:02:16.501651 | orchestrator | + all_metadata = (known after apply) 2026-02-28 00:02:16.501655 | orchestrator | + all_tags = (known after apply) 2026-02-28 00:02:16.501664 | orchestrator | + availability_zone = "nova" 2026-02-28 00:02:16.501668 | orchestrator | + config_drive = true 2026-02-28 00:02:16.501672 | orchestrator | + created = (known after apply) 2026-02-28 00:02:16.501676 | orchestrator | + flavor_id = (known after apply) 2026-02-28 00:02:16.501680 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-28 00:02:16.501683 | orchestrator | + force_delete = false 2026-02-28 00:02:16.501687 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-28 00:02:16.501691 | orchestrator | + id = (known after apply) 2026-02-28 00:02:16.501695 | orchestrator | + image_id = (known after apply) 2026-02-28 00:02:16.501703 | orchestrator | + image_name = (known after apply) 2026-02-28 00:02:16.501707 | orchestrator | + key_pair = "testbed" 2026-02-28 00:02:16.501711 | orchestrator | + name = 
"testbed-node-2" 2026-02-28 00:02:16.501715 | orchestrator | + power_state = "active" 2026-02-28 00:02:16.501719 | orchestrator | + region = (known after apply) 2026-02-28 00:02:16.501722 | orchestrator | + security_groups = (known after apply) 2026-02-28 00:02:16.501726 | orchestrator | + stop_before_destroy = false 2026-02-28 00:02:16.501730 | orchestrator | + updated = (known after apply) 2026-02-28 00:02:16.501734 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-28 00:02:16.501738 | orchestrator | 2026-02-28 00:02:16.501742 | orchestrator | + block_device { 2026-02-28 00:02:16.501746 | orchestrator | + boot_index = 0 2026-02-28 00:02:16.501749 | orchestrator | + delete_on_termination = false 2026-02-28 00:02:16.501753 | orchestrator | + destination_type = "volume" 2026-02-28 00:02:16.501757 | orchestrator | + multiattach = false 2026-02-28 00:02:16.501761 | orchestrator | + source_type = "volume" 2026-02-28 00:02:16.501765 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:16.501769 | orchestrator | } 2026-02-28 00:02:16.501772 | orchestrator | 2026-02-28 00:02:16.501776 | orchestrator | + network { 2026-02-28 00:02:16.501780 | orchestrator | + access_network = false 2026-02-28 00:02:16.501784 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-28 00:02:16.501788 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-28 00:02:16.501807 | orchestrator | + mac = (known after apply) 2026-02-28 00:02:16.501811 | orchestrator | + name = (known after apply) 2026-02-28 00:02:16.501814 | orchestrator | + port = (known after apply) 2026-02-28 00:02:16.501818 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:16.501822 | orchestrator | } 2026-02-28 00:02:16.501826 | orchestrator | } 2026-02-28 00:02:16.502123 | orchestrator | 2026-02-28 00:02:16.502142 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-02-28 00:02:16.502147 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-02-28 00:02:16.502151 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-28 00:02:16.502155 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-28 00:02:16.502158 | orchestrator | + all_metadata = (known after apply) 2026-02-28 00:02:16.502162 | orchestrator | + all_tags = (known after apply) 2026-02-28 00:02:16.502166 | orchestrator | + availability_zone = "nova" 2026-02-28 00:02:16.502170 | orchestrator | + config_drive = true 2026-02-28 00:02:16.502173 | orchestrator | + created = (known after apply) 2026-02-28 00:02:16.502177 | orchestrator | + flavor_id = (known after apply) 2026-02-28 00:02:16.502181 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-28 00:02:16.502185 | orchestrator | + force_delete = false 2026-02-28 00:02:16.502188 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-28 00:02:16.502193 | orchestrator | + id = (known after apply) 2026-02-28 00:02:16.502196 | orchestrator | + image_id = (known after apply) 2026-02-28 00:02:16.502200 | orchestrator | + image_name = (known after apply) 2026-02-28 00:02:16.502204 | orchestrator | + key_pair = "testbed" 2026-02-28 00:02:16.502208 | orchestrator | + name = "testbed-node-3" 2026-02-28 00:02:16.502212 | orchestrator | + power_state = "active" 2026-02-28 00:02:16.502215 | orchestrator | + region = (known after apply) 2026-02-28 00:02:16.502219 | orchestrator | + security_groups = (known after apply) 2026-02-28 00:02:16.502223 | orchestrator | + stop_before_destroy = false 2026-02-28 00:02:16.502227 | orchestrator | + updated = (known after apply) 2026-02-28 00:02:16.502231 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-28 00:02:16.502234 | orchestrator | 2026-02-28 00:02:16.502238 | orchestrator | + block_device { 2026-02-28 00:02:16.502246 | orchestrator | + boot_index = 0 2026-02-28 00:02:16.502250 | orchestrator | + delete_on_termination = false 2026-02-28 
00:02:16.502254 | orchestrator | + destination_type = "volume" 2026-02-28 00:02:16.502263 | orchestrator | + multiattach = false 2026-02-28 00:02:16.502267 | orchestrator | + source_type = "volume" 2026-02-28 00:02:16.502271 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:16.502275 | orchestrator | } 2026-02-28 00:02:16.502278 | orchestrator | 2026-02-28 00:02:16.502282 | orchestrator | + network { 2026-02-28 00:02:16.502286 | orchestrator | + access_network = false 2026-02-28 00:02:16.502290 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-28 00:02:16.502294 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-28 00:02:16.502297 | orchestrator | + mac = (known after apply) 2026-02-28 00:02:16.502301 | orchestrator | + name = (known after apply) 2026-02-28 00:02:16.502305 | orchestrator | + port = (known after apply) 2026-02-28 00:02:16.502309 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:16.502312 | orchestrator | } 2026-02-28 00:02:16.502316 | orchestrator | } 2026-02-28 00:02:16.502501 | orchestrator | 2026-02-28 00:02:16.502513 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-02-28 00:02:16.502517 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-28 00:02:16.502521 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-28 00:02:16.502525 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-28 00:02:16.502529 | orchestrator | + all_metadata = (known after apply) 2026-02-28 00:02:16.502533 | orchestrator | + all_tags = (known after apply) 2026-02-28 00:02:16.502537 | orchestrator | + availability_zone = "nova" 2026-02-28 00:02:16.502540 | orchestrator | + config_drive = true 2026-02-28 00:02:16.502544 | orchestrator | + created = (known after apply) 2026-02-28 00:02:16.502548 | orchestrator | + flavor_id = (known after apply) 2026-02-28 00:02:16.502552 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-28 00:02:16.502555 | 
orchestrator | + force_delete = false 2026-02-28 00:02:16.502559 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-28 00:02:16.502563 | orchestrator | + id = (known after apply) 2026-02-28 00:02:16.502567 | orchestrator | + image_id = (known after apply) 2026-02-28 00:02:16.502570 | orchestrator | + image_name = (known after apply) 2026-02-28 00:02:16.502574 | orchestrator | + key_pair = "testbed" 2026-02-28 00:02:16.502578 | orchestrator | + name = "testbed-node-4" 2026-02-28 00:02:16.502582 | orchestrator | + power_state = "active" 2026-02-28 00:02:16.502586 | orchestrator | + region = (known after apply) 2026-02-28 00:02:16.502589 | orchestrator | + security_groups = (known after apply) 2026-02-28 00:02:16.502593 | orchestrator | + stop_before_destroy = false 2026-02-28 00:02:16.502597 | orchestrator | + updated = (known after apply) 2026-02-28 00:02:16.502601 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-28 00:02:16.502604 | orchestrator | 2026-02-28 00:02:16.502608 | orchestrator | + block_device { 2026-02-28 00:02:16.502612 | orchestrator | + boot_index = 0 2026-02-28 00:02:16.502616 | orchestrator | + delete_on_termination = false 2026-02-28 00:02:16.502620 | orchestrator | + destination_type = "volume" 2026-02-28 00:02:16.502624 | orchestrator | + multiattach = false 2026-02-28 00:02:16.502627 | orchestrator | + source_type = "volume" 2026-02-28 00:02:16.502631 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:16.502635 | orchestrator | } 2026-02-28 00:02:16.502639 | orchestrator | 2026-02-28 00:02:16.502643 | orchestrator | + network { 2026-02-28 00:02:16.502646 | orchestrator | + access_network = false 2026-02-28 00:02:16.502650 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-28 00:02:16.502654 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-28 00:02:16.502658 | orchestrator | + mac = (known after apply) 2026-02-28 00:02:16.502662 | orchestrator | + name = (known 
after apply) 2026-02-28 00:02:16.502665 | orchestrator | + port = (known after apply) 2026-02-28 00:02:16.502669 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:16.502673 | orchestrator | } 2026-02-28 00:02:16.502677 | orchestrator | } 2026-02-28 00:02:16.502906 | orchestrator | 2026-02-28 00:02:16.502920 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-02-28 00:02:16.502925 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-28 00:02:16.502929 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-28 00:02:16.502932 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-28 00:02:16.502936 | orchestrator | + all_metadata = (known after apply) 2026-02-28 00:02:16.502940 | orchestrator | + all_tags = (known after apply) 2026-02-28 00:02:16.502944 | orchestrator | + availability_zone = "nova" 2026-02-28 00:02:16.502948 | orchestrator | + config_drive = true 2026-02-28 00:02:16.502951 | orchestrator | + created = (known after apply) 2026-02-28 00:02:16.502955 | orchestrator | + flavor_id = (known after apply) 2026-02-28 00:02:16.502959 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-28 00:02:16.502963 | orchestrator | + force_delete = false 2026-02-28 00:02:16.502973 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-28 00:02:16.502977 | orchestrator | + id = (known after apply) 2026-02-28 00:02:16.502981 | orchestrator | + image_id = (known after apply) 2026-02-28 00:02:16.502985 | orchestrator | + image_name = (known after apply) 2026-02-28 00:02:16.502988 | orchestrator | + key_pair = "testbed" 2026-02-28 00:02:16.502992 | orchestrator | + name = "testbed-node-5" 2026-02-28 00:02:16.502996 | orchestrator | + power_state = "active" 2026-02-28 00:02:16.503000 | orchestrator | + region = (known after apply) 2026-02-28 00:02:16.503003 | orchestrator | + security_groups = (known after apply) 2026-02-28 00:02:16.503007 | orchestrator | + 
stop_before_destroy = false 2026-02-28 00:02:16.503011 | orchestrator | + updated = (known after apply) 2026-02-28 00:02:16.503015 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-28 00:02:16.503018 | orchestrator | 2026-02-28 00:02:16.503022 | orchestrator | + block_device { 2026-02-28 00:02:16.503026 | orchestrator | + boot_index = 0 2026-02-28 00:02:16.503030 | orchestrator | + delete_on_termination = false 2026-02-28 00:02:16.503034 | orchestrator | + destination_type = "volume" 2026-02-28 00:02:16.503037 | orchestrator | + multiattach = false 2026-02-28 00:02:16.503041 | orchestrator | + source_type = "volume" 2026-02-28 00:02:16.503045 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:16.503049 | orchestrator | } 2026-02-28 00:02:16.503053 | orchestrator | 2026-02-28 00:02:16.503057 | orchestrator | + network { 2026-02-28 00:02:16.503060 | orchestrator | + access_network = false 2026-02-28 00:02:16.503064 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-28 00:02:16.503068 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-28 00:02:16.503072 | orchestrator | + mac = (known after apply) 2026-02-28 00:02:16.503076 | orchestrator | + name = (known after apply) 2026-02-28 00:02:16.503079 | orchestrator | + port = (known after apply) 2026-02-28 00:02:16.503083 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:16.503087 | orchestrator | } 2026-02-28 00:02:16.503091 | orchestrator | } 2026-02-28 00:02:16.503137 | orchestrator | 2026-02-28 00:02:16.503148 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-02-28 00:02:16.503152 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-02-28 00:02:16.503156 | orchestrator | + fingerprint = (known after apply) 2026-02-28 00:02:16.503160 | orchestrator | + id = (known after apply) 2026-02-28 00:02:16.503164 | orchestrator | + name = "testbed" 2026-02-28 00:02:16.503168 | orchestrator | + private_key = 
(sensitive value) 2026-02-28 00:02:16.503172 | orchestrator | + public_key = (known after apply) 2026-02-28 00:02:16.503176 | orchestrator | + region = (known after apply) 2026-02-28 00:02:16.503179 | orchestrator | + user_id = (known after apply) 2026-02-28 00:02:16.503183 | orchestrator | } 2026-02-28 00:02:16.503223 | orchestrator | 2026-02-28 00:02:16.503234 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-02-28 00:02:16.503239 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-28 00:02:16.503247 | orchestrator | + device = (known after apply) 2026-02-28 00:02:16.503251 | orchestrator | + id = (known after apply) 2026-02-28 00:02:16.503254 | orchestrator | + instance_id = (known after apply) 2026-02-28 00:02:16.503258 | orchestrator | + region = (known after apply) 2026-02-28 00:02:16.503262 | orchestrator | + volume_id = (known after apply) 2026-02-28 00:02:16.503266 | orchestrator | } 2026-02-28 00:02:16.503301 | orchestrator | 2026-02-28 00:02:16.503312 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-02-28 00:02:16.503317 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-28 00:02:16.503321 | orchestrator | + device = (known after apply) 2026-02-28 00:02:16.503325 | orchestrator | + id = (known after apply) 2026-02-28 00:02:16.503328 | orchestrator | + instance_id = (known after apply) 2026-02-28 00:02:16.503332 | orchestrator | + region = (known after apply) 2026-02-28 00:02:16.503336 | orchestrator | + volume_id = (known after apply) 2026-02-28 00:02:16.503340 | orchestrator | } 2026-02-28 00:02:16.503378 | orchestrator | 2026-02-28 00:02:16.503389 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-02-28 00:02:16.503394 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
2026-02-28 00:02:16.503398 | orchestrator |     + device      = (known after apply)
2026-02-28 00:02:16.503402 | orchestrator |     + id          = (known after apply)
2026-02-28 00:02:16.503406 | orchestrator |     + instance_id = (known after apply)
2026-02-28 00:02:16.503409 | orchestrator |     + region      = (known after apply)
2026-02-28 00:02:16.503413 | orchestrator |     + volume_id   = (known after apply)
2026-02-28 00:02:16.503417 | orchestrator |   }
2026-02-28 00:02:16.503451 | orchestrator |
2026-02-28 00:02:16.503462 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2026-02-28 00:02:16.503467 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-28 00:02:16.503471 | orchestrator |     + device      = (known after apply)
2026-02-28 00:02:16.503474 | orchestrator |     + id          = (known after apply)
2026-02-28 00:02:16.503478 | orchestrator |     + instance_id = (known after apply)
2026-02-28 00:02:16.503482 | orchestrator |     + region      = (known after apply)
2026-02-28 00:02:16.503486 | orchestrator |     + volume_id   = (known after apply)
2026-02-28 00:02:16.503490 | orchestrator |   }
2026-02-28 00:02:16.503522 | orchestrator |
2026-02-28 00:02:16.503532 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2026-02-28 00:02:16.503537 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-28 00:02:16.503541 | orchestrator |     + device      = (known after apply)
2026-02-28 00:02:16.503545 | orchestrator |     + id          = (known after apply)
2026-02-28 00:02:16.503549 | orchestrator |     + instance_id = (known after apply)
2026-02-28 00:02:16.503555 | orchestrator |     + region      = (known after apply)
2026-02-28 00:02:16.503559 | orchestrator |     + volume_id   = (known after apply)
2026-02-28 00:02:16.503563 | orchestrator |   }
2026-02-28 00:02:16.503595 | orchestrator |
2026-02-28 00:02:16.503606 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2026-02-28 00:02:16.503610 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-28 00:02:16.503614 | orchestrator |     + device      = (known after apply)
2026-02-28 00:02:16.503618 | orchestrator |     + id          = (known after apply)
2026-02-28 00:02:16.503622 | orchestrator |     + instance_id = (known after apply)
2026-02-28 00:02:16.503626 | orchestrator |     + region      = (known after apply)
2026-02-28 00:02:16.503630 | orchestrator |     + volume_id   = (known after apply)
2026-02-28 00:02:16.503633 | orchestrator |   }
2026-02-28 00:02:16.503667 | orchestrator |
2026-02-28 00:02:16.503678 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2026-02-28 00:02:16.503682 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-28 00:02:16.503686 | orchestrator |     + device      = (known after apply)
2026-02-28 00:02:16.503690 | orchestrator |     + id          = (known after apply)
2026-02-28 00:02:16.503694 | orchestrator |     + instance_id = (known after apply)
2026-02-28 00:02:16.503698 | orchestrator |     + region      = (known after apply)
2026-02-28 00:02:16.503706 | orchestrator |     + volume_id   = (known after apply)
2026-02-28 00:02:16.503709 | orchestrator |   }
2026-02-28 00:02:16.503743 | orchestrator |
2026-02-28 00:02:16.503754 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2026-02-28 00:02:16.503758 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-28 00:02:16.503762 | orchestrator |     + device      = (known after apply)
2026-02-28 00:02:16.503766 | orchestrator |     + id          = (known after apply)
2026-02-28 00:02:16.503770 | orchestrator |     + instance_id = (known after apply)
2026-02-28 00:02:16.503774 | orchestrator |     + region      = (known after apply)
2026-02-28 00:02:16.503778 | orchestrator |     + volume_id   = (known after apply)
2026-02-28 00:02:16.503781 | orchestrator |   }
2026-02-28 00:02:16.503830 | orchestrator |
2026-02-28 00:02:16.503841 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2026-02-28 00:02:16.503846 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-28 00:02:16.503849 | orchestrator |     + device      = (known after apply)
2026-02-28 00:02:16.503853 | orchestrator |     + id          = (known after apply)
2026-02-28 00:02:16.503857 | orchestrator |     + instance_id = (known after apply)
2026-02-28 00:02:16.503861 | orchestrator |     + region      = (known after apply)
2026-02-28 00:02:16.503865 | orchestrator |     + volume_id   = (known after apply)
2026-02-28 00:02:16.503868 | orchestrator |   }
2026-02-28 00:02:16.503903 | orchestrator |
2026-02-28 00:02:16.503914 | orchestrator |   # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2026-02-28 00:02:16.503920 | orchestrator |   + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2026-02-28 00:02:16.503924 | orchestrator |     + fixed_ip    = (known after apply)
2026-02-28 00:02:16.503928 | orchestrator |     + floating_ip = (known after apply)
2026-02-28 00:02:16.503931 | orchestrator |     + id          = (known after apply)
2026-02-28 00:02:16.503935 | orchestrator |     + port_id     = (known after apply)
2026-02-28 00:02:16.503939 | orchestrator |     + region      = (known after apply)
2026-02-28 00:02:16.503943 | orchestrator |   }
2026-02-28 00:02:16.504007 | orchestrator |
2026-02-28 00:02:16.504018 | orchestrator |   # openstack_networking_floatingip_v2.manager_floating_ip will be created
2026-02-28 00:02:16.504023 | orchestrator |   + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2026-02-28 00:02:16.504026 | orchestrator |     + address    = (known after apply)
2026-02-28 00:02:16.504030 | orchestrator |     + all_tags   = (known after apply)
2026-02-28 00:02:16.504034 | orchestrator |     + dns_domain = (known after apply)
2026-02-28 00:02:16.504038 | orchestrator |     + dns_name   = (known after apply)
2026-02-28 00:02:16.504042 | orchestrator |     + fixed_ip   = (known after apply)
2026-02-28 00:02:16.504046 | orchestrator |     + id         = (known after apply)
2026-02-28 00:02:16.504049 | orchestrator |     + pool       = "public"
2026-02-28 00:02:16.504053 | orchestrator |     + port_id    = (known after apply)
2026-02-28 00:02:16.504057 | orchestrator |     + region     = (known after apply)
2026-02-28 00:02:16.504061 | orchestrator |     + subnet_id  = (known after apply)
2026-02-28 00:02:16.504065 | orchestrator |     + tenant_id  = (known after apply)
2026-02-28 00:02:16.504068 | orchestrator |   }
2026-02-28 00:02:16.504155 | orchestrator |
2026-02-28 00:02:16.504166 | orchestrator |   # openstack_networking_network_v2.net_management will be created
2026-02-28 00:02:16.504171 | orchestrator |   + resource "openstack_networking_network_v2" "net_management" {
2026-02-28 00:02:16.504175 | orchestrator |     + admin_state_up          = (known after apply)
2026-02-28 00:02:16.504179 | orchestrator |     + all_tags                = (known after apply)
2026-02-28 00:02:16.504183 | orchestrator |     + availability_zone_hints = [
2026-02-28 00:02:16.504186 | orchestrator |         + "nova",
2026-02-28 00:02:16.504190 | orchestrator |       ]
2026-02-28 00:02:16.504194 | orchestrator |     + dns_domain              = (known after apply)
2026-02-28 00:02:16.504198 | orchestrator |     + external                = (known after apply)
2026-02-28 00:02:16.504202 | orchestrator |     + id                      = (known after apply)
2026-02-28 00:02:16.504206 | orchestrator |     + mtu                     = (known after apply)
2026-02-28 00:02:16.504209 | orchestrator |     + name                    = "net-testbed-management"
2026-02-28 00:02:16.504213 | orchestrator |     + port_security_enabled   = (known after apply)
2026-02-28 00:02:16.504221 | orchestrator |     + qos_policy_id           = (known after apply)
2026-02-28 00:02:16.504224 | orchestrator |     + region                  = (known after apply)
2026-02-28 00:02:16.504228 | orchestrator |     + shared                  = (known after apply)
2026-02-28 00:02:16.504232 | orchestrator |     + tenant_id               = (known after apply)
2026-02-28 00:02:16.504236 | orchestrator |     + transparent_vlan        = (known after apply)
2026-02-28 00:02:16.504240 | orchestrator |
2026-02-28 00:02:16.504243 | orchestrator |     + segments (known after apply)
2026-02-28 00:02:16.504247 | orchestrator |   }
2026-02-28 00:02:16.504367 | orchestrator |
2026-02-28 00:02:16.504379 | orchestrator |   # openstack_networking_port_v2.manager_port_management will be created
2026-02-28 00:02:16.504383 | orchestrator |   + resource "openstack_networking_port_v2" "manager_port_management" {
2026-02-28 00:02:16.504387 | orchestrator |     + admin_state_up         = (known after apply)
2026-02-28 00:02:16.504391 | orchestrator |     + all_fixed_ips          = (known after apply)
2026-02-28 00:02:16.504395 | orchestrator |     + all_security_group_ids = (known after apply)
2026-02-28 00:02:16.504402 | orchestrator |     + all_tags               = (known after apply)
2026-02-28 00:02:16.504406 | orchestrator |     + device_id              = (known after apply)
2026-02-28 00:02:16.504409 | orchestrator |     + device_owner           = (known after apply)
2026-02-28 00:02:16.504413 | orchestrator |     + dns_assignment         = (known after apply)
2026-02-28 00:02:16.504417 | orchestrator |     + dns_name               = (known after apply)
2026-02-28 00:02:16.504421 | orchestrator |     + id                     = (known after apply)
2026-02-28 00:02:16.504425 | orchestrator |     + mac_address            = (known after apply)
2026-02-28 00:02:16.504428 | orchestrator |     + network_id             = (known after apply)
2026-02-28 00:02:16.504432 | orchestrator |     + port_security_enabled  = (known after apply)
2026-02-28 00:02:16.504436 | orchestrator |     + qos_policy_id          = (known after apply)
2026-02-28 00:02:16.504440 | orchestrator |     + region                 = (known after apply)
2026-02-28 00:02:16.504443 | orchestrator |     + security_group_ids     = (known after apply)
2026-02-28 00:02:16.504447 | orchestrator |     + tenant_id              = (known after apply)
2026-02-28 00:02:16.504451 | orchestrator |
2026-02-28 00:02:16.504455 | orchestrator |     + allowed_address_pairs {
2026-02-28 00:02:16.504459 | orchestrator |         + ip_address = "192.168.16.8/32"
2026-02-28 00:02:16.504463 | orchestrator |       }
2026-02-28 00:02:16.504466 | orchestrator |
2026-02-28 00:02:16.504470 | orchestrator |     + binding (known after apply)
2026-02-28 00:02:16.504474 | orchestrator |
2026-02-28 00:02:16.504478 | orchestrator |     + fixed_ip {
2026-02-28 00:02:16.504482 | orchestrator |         + ip_address = "192.168.16.5"
2026-02-28 00:02:16.504486 | orchestrator |         + subnet_id  = (known after apply)
2026-02-28 00:02:16.504489 | orchestrator |       }
2026-02-28 00:02:16.504493 | orchestrator |   }
2026-02-28 00:02:16.504624 | orchestrator |
2026-02-28 00:02:16.504635 | orchestrator |   # openstack_networking_port_v2.node_port_management[0] will be created
2026-02-28 00:02:16.504640 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-02-28 00:02:16.504644 | orchestrator |     + admin_state_up         = (known after apply)
2026-02-28 00:02:16.504648 | orchestrator |     + all_fixed_ips          = (known after apply)
2026-02-28 00:02:16.504652 | orchestrator |     + all_security_group_ids = (known after apply)
2026-02-28 00:02:16.504655 | orchestrator |     + all_tags               = (known after apply)
2026-02-28 00:02:16.504659 | orchestrator |     + device_id              = (known after apply)
2026-02-28 00:02:16.504663 | orchestrator |     + device_owner           = (known after apply)
2026-02-28 00:02:16.504667 | orchestrator |     + dns_assignment         = (known after apply)
2026-02-28 00:02:16.504670 | orchestrator |     + dns_name               = (known after apply)
2026-02-28 00:02:16.504674 | orchestrator |     + id                     = (known after apply)
2026-02-28 00:02:16.504678 | orchestrator |     + mac_address            = (known after apply)
2026-02-28 00:02:16.504682 | orchestrator |     + network_id             = (known after apply)
2026-02-28 00:02:16.504686 | orchestrator |     + port_security_enabled  = (known after apply)
2026-02-28 00:02:16.504689 | orchestrator |     + qos_policy_id          = (known after apply)
2026-02-28 00:02:16.504693 | orchestrator |     + region                 = (known after apply)
2026-02-28 00:02:16.504703 | orchestrator |     + security_group_ids     = (known after apply)
2026-02-28 00:02:16.504707 | orchestrator |     + tenant_id              = (known after apply)
2026-02-28 00:02:16.504710 | orchestrator |
2026-02-28 00:02:16.504714 | orchestrator |     + allowed_address_pairs {
2026-02-28 00:02:16.504718 | orchestrator |         + ip_address = "192.168.16.254/32"
2026-02-28 00:02:16.504722 | orchestrator |       }
2026-02-28 00:02:16.504726 | orchestrator |     + allowed_address_pairs {
2026-02-28 00:02:16.504730 | orchestrator |         + ip_address = "192.168.16.8/32"
2026-02-28 00:02:16.504733 | orchestrator |       }
2026-02-28 00:02:16.504737 | orchestrator |     + allowed_address_pairs {
2026-02-28 00:02:16.504741 | orchestrator |         + ip_address = "192.168.16.9/32"
2026-02-28 00:02:16.504745 | orchestrator |       }
2026-02-28 00:02:16.504749 | orchestrator |
2026-02-28 00:02:16.504752 | orchestrator |     + binding (known after apply)
2026-02-28 00:02:16.504756 | orchestrator |
2026-02-28 00:02:16.504760 | orchestrator |     + fixed_ip {
2026-02-28 00:02:16.504764 | orchestrator |         + ip_address = "192.168.16.10"
2026-02-28 00:02:16.504768 | orchestrator |         + subnet_id  = (known after apply)
2026-02-28 00:02:16.504771 | orchestrator |       }
2026-02-28 00:02:16.504775 | orchestrator |   }
2026-02-28 00:02:16.504964 | orchestrator |
2026-02-28 00:02:16.504979 | orchestrator |   # openstack_networking_port_v2.node_port_management[1] will be created
2026-02-28 00:02:16.504984 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-02-28 00:02:16.504988 | orchestrator |     + admin_state_up         = (known after apply)
2026-02-28 00:02:16.504992 | orchestrator |     + all_fixed_ips          = (known after apply)
2026-02-28 00:02:16.504996 | orchestrator |     + all_security_group_ids = (known after apply)
2026-02-28 00:02:16.505000 | orchestrator |     + all_tags               = (known after apply)
2026-02-28 00:02:16.505004 | orchestrator |     + device_id              = (known after apply)
2026-02-28 00:02:16.505008 | orchestrator |     + device_owner           = (known after apply)
2026-02-28 00:02:16.505011 | orchestrator |     + dns_assignment         = (known after apply)
2026-02-28 00:02:16.505015 | orchestrator |     + dns_name               = (known after apply)
2026-02-28 00:02:16.505019 | orchestrator |     + id                     = (known after apply)
2026-02-28 00:02:16.505023 | orchestrator |     + mac_address            = (known after apply)
2026-02-28 00:02:16.505027 | orchestrator |     + network_id             = (known after apply)
2026-02-28 00:02:16.505030 | orchestrator |     + port_security_enabled  = (known after apply)
2026-02-28 00:02:16.505034 | orchestrator |     + qos_policy_id          = (known after apply)
2026-02-28 00:02:16.505038 | orchestrator |     + region                 = (known after apply)
2026-02-28 00:02:16.505042 | orchestrator |     + security_group_ids     = (known after apply)
2026-02-28 00:02:16.505045 | orchestrator |     + tenant_id              = (known after apply)
2026-02-28 00:02:16.505049 | orchestrator |
2026-02-28 00:02:16.505053 | orchestrator |     + allowed_address_pairs {
2026-02-28 00:02:16.505057 | orchestrator |         + ip_address = "192.168.16.254/32"
2026-02-28 00:02:16.505061 | orchestrator |       }
2026-02-28 00:02:16.505065 | orchestrator |     + allowed_address_pairs {
2026-02-28 00:02:16.505068 | orchestrator |         + ip_address = "192.168.16.8/32"
2026-02-28 00:02:16.505072 | orchestrator |       }
2026-02-28 00:02:16.505076 | orchestrator |     + allowed_address_pairs {
2026-02-28 00:02:16.505080 | orchestrator |         + ip_address = "192.168.16.9/32"
2026-02-28 00:02:16.505083 | orchestrator |       }
2026-02-28 00:02:16.505087 | orchestrator |
2026-02-28 00:02:16.505091 | orchestrator |     + binding (known after apply)
2026-02-28 00:02:16.505095 | orchestrator |
2026-02-28 00:02:16.505099 | orchestrator |     + fixed_ip {
2026-02-28 00:02:16.505102 | orchestrator |         + ip_address = "192.168.16.11"
2026-02-28 00:02:16.505106 | orchestrator |         + subnet_id  = (known after apply)
2026-02-28 00:02:16.505110 | orchestrator |       }
2026-02-28 00:02:16.505114 | orchestrator |   }
2026-02-28 00:02:16.505258 | orchestrator |
2026-02-28 00:02:16.505269 | orchestrator |   # openstack_networking_port_v2.node_port_management[2] will be created
2026-02-28 00:02:16.505274 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-02-28 00:02:16.505278 | orchestrator |     + admin_state_up         = (known after apply)
2026-02-28 00:02:16.505282 | orchestrator |     + all_fixed_ips          = (known after apply)
2026-02-28 00:02:16.505285 | orchestrator |     + all_security_group_ids = (known after apply)
2026-02-28 00:02:16.505289 | orchestrator |     + all_tags               = (known after apply)
2026-02-28 00:02:16.505297 | orchestrator |     + device_id              = (known after apply)
2026-02-28 00:02:16.505301 | orchestrator |     + device_owner           = (known after apply)
2026-02-28 00:02:16.505305 | orchestrator |     + dns_assignment         = (known after apply)
2026-02-28 00:02:16.505309 | orchestrator |     + dns_name               = (known after apply)
2026-02-28 00:02:16.505315 | orchestrator |     + id                     = (known after apply)
2026-02-28 00:02:16.505319 | orchestrator |     + mac_address            = (known after apply)
2026-02-28 00:02:16.505323 | orchestrator |     + network_id             = (known after apply)
2026-02-28 00:02:16.505327 | orchestrator |     + port_security_enabled  = (known after apply)
2026-02-28 00:02:16.505330 | orchestrator |     + qos_policy_id          = (known after apply)
2026-02-28 00:02:16.505334 | orchestrator |     + region                 = (known after apply)
2026-02-28 00:02:16.505338 | orchestrator |     + security_group_ids     = (known after apply)
2026-02-28 00:02:16.505342 | orchestrator |     + tenant_id              = (known after apply)
2026-02-28 00:02:16.505346 | orchestrator |
2026-02-28 00:02:16.505349 | orchestrator |     + allowed_address_pairs {
2026-02-28 00:02:16.505353 | orchestrator |         + ip_address = "192.168.16.254/32"
2026-02-28 00:02:16.505357 | orchestrator |       }
2026-02-28 00:02:16.505361 | orchestrator |     + allowed_address_pairs {
2026-02-28 00:02:16.505365 | orchestrator |         + ip_address = "192.168.16.8/32"
2026-02-28 00:02:16.505369 | orchestrator |       }
2026-02-28 00:02:16.505372 | orchestrator |     + allowed_address_pairs {
2026-02-28 00:02:16.505376 | orchestrator |         + ip_address = "192.168.16.9/32"
2026-02-28 00:02:16.505380 | orchestrator |       }
2026-02-28 00:02:16.505384 | orchestrator |
2026-02-28 00:02:16.505388 | orchestrator |     + binding (known after apply)
2026-02-28 00:02:16.505392 | orchestrator |
2026-02-28 00:02:16.505395 | orchestrator |     + fixed_ip {
2026-02-28 00:02:16.505400 | orchestrator |         + ip_address = "192.168.16.12"
2026-02-28 00:02:16.505403 | orchestrator |         + subnet_id  = (known after apply)
2026-02-28 00:02:16.505407 | orchestrator |       }
2026-02-28 00:02:16.505411 | orchestrator |   }
2026-02-28 00:02:16.505551 | orchestrator |
2026-02-28 00:02:16.505562 | orchestrator |   # openstack_networking_port_v2.node_port_management[3] will be created
2026-02-28 00:02:16.505567 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-02-28 00:02:16.505571 | orchestrator |     + admin_state_up         = (known after apply)
2026-02-28 00:02:16.505575 | orchestrator |     + all_fixed_ips          = (known after apply)
2026-02-28 00:02:16.505579 | orchestrator |     + all_security_group_ids = (known after apply)
2026-02-28 00:02:16.505582 | orchestrator |     + all_tags               = (known after apply)
2026-02-28 00:02:16.505586 | orchestrator |     + device_id              = (known after apply)
2026-02-28 00:02:16.505590 | orchestrator |     + device_owner           = (known after apply)
2026-02-28 00:02:16.505594 | orchestrator |     + dns_assignment         = (known after apply)
2026-02-28 00:02:16.505598 | orchestrator |     + dns_name               = (known after apply)
2026-02-28 00:02:16.505601 | orchestrator |     + id                     = (known after apply)
2026-02-28 00:02:16.505605 | orchestrator |     + mac_address            = (known after apply)
2026-02-28 00:02:16.505609 | orchestrator |     + network_id             = (known after apply)
2026-02-28 00:02:16.505613 | orchestrator |     + port_security_enabled  = (known after apply)
2026-02-28 00:02:16.505616 | orchestrator |     + qos_policy_id          = (known after apply)
2026-02-28 00:02:16.505620 | orchestrator |     + region                 = (known after apply)
2026-02-28 00:02:16.505624 | orchestrator |     + security_group_ids     = (known after apply)
2026-02-28 00:02:16.505628 | orchestrator |     + tenant_id              = (known after apply)
2026-02-28 00:02:16.505632 | orchestrator |
2026-02-28 00:02:16.505635 | orchestrator |     + allowed_address_pairs {
2026-02-28 00:02:16.505639 | orchestrator |         + ip_address = "192.168.16.254/32"
2026-02-28 00:02:16.505643 | orchestrator |       }
2026-02-28 00:02:16.505647 | orchestrator |     + allowed_address_pairs {
2026-02-28 00:02:16.505651 | orchestrator |         + ip_address = "192.168.16.8/32"
2026-02-28 00:02:16.505655 | orchestrator |       }
2026-02-28 00:02:16.505658 | orchestrator |     + allowed_address_pairs {
2026-02-28 00:02:16.505662 | orchestrator |         + ip_address = "192.168.16.9/32"
2026-02-28 00:02:16.505666 | orchestrator |       }
2026-02-28 00:02:16.505670 | orchestrator |
2026-02-28 00:02:16.505677 | orchestrator |     + binding (known after apply)
2026-02-28 00:02:16.505681 | orchestrator |
2026-02-28 00:02:16.505685 | orchestrator |     + fixed_ip {
2026-02-28 00:02:16.505689 | orchestrator |         + ip_address = "192.168.16.13"
2026-02-28 00:02:16.505692 | orchestrator |         + subnet_id  = (known after apply)
2026-02-28 00:02:16.505696 | orchestrator |       }
2026-02-28 00:02:16.505700 | orchestrator |   }
2026-02-28 00:02:16.505865 | orchestrator |
2026-02-28 00:02:16.505878 | orchestrator |   # openstack_networking_port_v2.node_port_management[4] will be created
2026-02-28 00:02:16.505882 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-02-28 00:02:16.505886 | orchestrator |     + admin_state_up         = (known after apply)
2026-02-28 00:02:16.505890 | orchestrator |     + all_fixed_ips          = (known after apply)
2026-02-28 00:02:16.505894 | orchestrator |     + all_security_group_ids = (known after apply)
2026-02-28 00:02:16.505897 | orchestrator |     + all_tags               = (known after apply)
2026-02-28 00:02:16.505901 | orchestrator |     + device_id              = (known after apply)
2026-02-28 00:02:16.505905 | orchestrator |     + device_owner           = (known after apply)
2026-02-28 00:02:16.505909 | orchestrator |     + dns_assignment         = (known after apply)
2026-02-28 00:02:16.505912 | orchestrator |     + dns_name               = (known after apply)
2026-02-28 00:02:16.505916 | orchestrator |     + id                     = (known after apply)
2026-02-28 00:02:16.505920 | orchestrator |     + mac_address            = (known after apply)
2026-02-28 00:02:16.505924 | orchestrator |     + network_id             = (known after apply)
2026-02-28 00:02:16.505927 | orchestrator |     + port_security_enabled  = (known after apply)
2026-02-28 00:02:16.505931 | orchestrator |     + qos_policy_id          = (known after apply)
2026-02-28 00:02:16.505935 | orchestrator |     + region                 = (known after apply)
2026-02-28 00:02:16.505939 | orchestrator |     + security_group_ids     = (known after apply)
2026-02-28 00:02:16.505943 | orchestrator |     + tenant_id              = (known after apply)
2026-02-28 00:02:16.505947 | orchestrator |
2026-02-28 00:02:16.505951 | orchestrator |     + allowed_address_pairs {
2026-02-28 00:02:16.505955 | orchestrator |         + ip_address = "192.168.16.254/32"
2026-02-28 00:02:16.505959 | orchestrator |       }
2026-02-28 00:02:16.505963 | orchestrator |     + allowed_address_pairs {
2026-02-28 00:02:16.505967 | orchestrator |         + ip_address = "192.168.16.8/32"
2026-02-28 00:02:16.505970 | orchestrator |       }
2026-02-28 00:02:16.505974 | orchestrator |     + allowed_address_pairs {
2026-02-28 00:02:16.505978 | orchestrator |         + ip_address = "192.168.16.9/32"
2026-02-28 00:02:16.505982 | orchestrator |       }
2026-02-28 00:02:16.505985 | orchestrator |
2026-02-28 00:02:16.505989 | orchestrator |     + binding (known after apply)
2026-02-28 00:02:16.505993 | orchestrator |
2026-02-28 00:02:16.505997 | orchestrator |     + fixed_ip {
2026-02-28 00:02:16.506001 | orchestrator |         + ip_address = "192.168.16.14"
2026-02-28 00:02:16.506005 | orchestrator |         + subnet_id  = (known after apply)
2026-02-28 00:02:16.506008 | orchestrator |       }
2026-02-28 00:02:16.506033 | orchestrator |   }
2026-02-28 00:02:16.506174 | orchestrator |
2026-02-28 00:02:16.506185 | orchestrator |   # openstack_networking_port_v2.node_port_management[5] will be created
2026-02-28 00:02:16.506190 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-02-28 00:02:16.506194 | orchestrator |     + admin_state_up         = (known after apply)
2026-02-28 00:02:16.506198 | orchestrator |     + all_fixed_ips          = (known after apply)
2026-02-28 00:02:16.506202 | orchestrator |     + all_security_group_ids = (known after apply)
2026-02-28 00:02:16.506206 | orchestrator |     + all_tags               = (known after apply)
2026-02-28 00:02:16.506210 | orchestrator |     + device_id              = (known after apply)
2026-02-28 00:02:16.506214 | orchestrator |     + device_owner           = (known after apply)
2026-02-28 00:02:16.506217 | orchestrator |     + dns_assignment         = (known after apply)
2026-02-28 00:02:16.506221 | orchestrator |     + dns_name               = (known after apply)
2026-02-28 00:02:16.506225 | orchestrator |     + id                     = (known after apply)
2026-02-28 00:02:16.506229 | orchestrator |     + mac_address            = (known after apply)
2026-02-28 00:02:16.506233 | orchestrator |     + network_id             = (known after apply)
2026-02-28 00:02:16.506237 | orchestrator |     + port_security_enabled  = (known after apply)
2026-02-28 00:02:16.506241 | orchestrator |     + qos_policy_id          = (known after apply)
2026-02-28 00:02:16.506249 | orchestrator |     + region                 = (known after apply)
2026-02-28 00:02:16.506252 | orchestrator |     + security_group_ids     = (known after apply)
2026-02-28 00:02:16.506256 | orchestrator |     + tenant_id              = (known after apply)
2026-02-28 00:02:16.506260 | orchestrator |
2026-02-28 00:02:16.506264 | orchestrator |     + allowed_address_pairs {
2026-02-28 00:02:16.506268 | orchestrator |         + ip_address = "192.168.16.254/32"
2026-02-28 00:02:16.506272 | orchestrator |       }
2026-02-28 00:02:16.506275 | orchestrator |     + allowed_address_pairs {
2026-02-28 00:02:16.506279 | orchestrator |         + ip_address = "192.168.16.8/32"
2026-02-28 00:02:16.506283 | orchestrator |       }
2026-02-28 00:02:16.506287 | orchestrator |     + allowed_address_pairs {
2026-02-28 00:02:16.506291 | orchestrator |         + ip_address = "192.168.16.9/32"
2026-02-28 00:02:16.506295 | orchestrator |       }
2026-02-28 00:02:16.506299 | orchestrator |
2026-02-28 00:02:16.506306 | orchestrator |     + binding (known after apply)
2026-02-28 00:02:16.506310 | orchestrator |
2026-02-28 00:02:16.506314 | orchestrator |     + fixed_ip {
2026-02-28 00:02:16.506318 | orchestrator |         + ip_address = "192.168.16.15"
2026-02-28 00:02:16.506322 | orchestrator |         + subnet_id  = (known after apply)
2026-02-28 00:02:16.506325 | orchestrator |       }
2026-02-28 00:02:16.506329 | orchestrator |   }
2026-02-28 00:02:16.506371 | orchestrator |
2026-02-28 00:02:16.506382 | orchestrator |   # openstack_networking_router_interface_v2.router_interface will be created
2026-02-28 00:02:16.506387 | orchestrator |   + resource "openstack_networking_router_interface_v2" "router_interface" {
2026-02-28 00:02:16.506391 | orchestrator |     + force_destroy = false
2026-02-28 00:02:16.506395 | orchestrator |     + id            = (known after apply)
2026-02-28 00:02:16.506398 | orchestrator |     + port_id       = (known after apply)
2026-02-28 00:02:16.506402 | orchestrator |     + region        = (known after apply)
2026-02-28 00:02:16.506406 | orchestrator |     + router_id     = (known after apply)
2026-02-28 00:02:16.506410 | orchestrator |     + subnet_id     = (known after apply)
2026-02-28 00:02:16.506413 | orchestrator |   }
2026-02-28 00:02:16.506491 | orchestrator |
2026-02-28 00:02:16.506502 | orchestrator |   # openstack_networking_router_v2.router will be created
2026-02-28 00:02:16.506507 | orchestrator |   + resource "openstack_networking_router_v2" "router" {
2026-02-28 00:02:16.506511 | orchestrator |     + admin_state_up          = (known after apply)
2026-02-28 00:02:16.506515 | orchestrator |     + all_tags                = (known after apply)
2026-02-28 00:02:16.506518 | orchestrator |     + availability_zone_hints = [
2026-02-28 00:02:16.506522 | orchestrator |         + "nova",
2026-02-28 00:02:16.506526 | orchestrator |       ]
2026-02-28 00:02:16.506530 | orchestrator |     + distributed             = (known after apply)
2026-02-28 00:02:16.506534 | orchestrator |     + enable_snat             = (known after apply)
2026-02-28 00:02:16.506538 | orchestrator |     + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
2026-02-28 00:02:16.506541 | orchestrator |     + external_qos_policy_id  = (known after apply)
2026-02-28 00:02:16.506545 | orchestrator |     + id                      = (known after apply)
2026-02-28 00:02:16.506549 | orchestrator |     + name                    = "testbed"
2026-02-28 00:02:16.506553 | orchestrator |     + region                  = (known after apply)
2026-02-28 00:02:16.506557 | orchestrator |     + tenant_id               = (known after apply)
2026-02-28 00:02:16.506560 | orchestrator |
2026-02-28 00:02:16.506564 | orchestrator |     + external_fixed_ip (known after apply)
2026-02-28 00:02:16.506568 | orchestrator |   }
2026-02-28 00:02:16.506699 | orchestrator |
2026-02-28 00:02:16.506712 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
2026-02-28 00:02:16.506718 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
2026-02-28 00:02:16.506722 | orchestrator |     + description             = "ssh"
2026-02-28 00:02:16.506726 | orchestrator |     + direction               = "ingress"
2026-02-28 00:02:16.506729 | orchestrator |     + ethertype               = "IPv4"
2026-02-28 00:02:16.506733 | orchestrator |     + id                      = (known after apply)
2026-02-28 00:02:16.506737 | orchestrator |     + port_range_max          = 22
2026-02-28 00:02:16.506741 | orchestrator |     + port_range_min          = 22
2026-02-28 00:02:16.506745 | orchestrator |     + protocol                = "tcp"
2026-02-28 00:02:16.506749 | orchestrator |     + region                  = (known after apply)
2026-02-28 00:02:16.506757 | orchestrator |     + remote_address_group_id = (known after apply)
2026-02-28 00:02:16.506761 | orchestrator |     + remote_group_id         = (known after apply)
2026-02-28 00:02:16.506765 | orchestrator |     + remote_ip_prefix        = "0.0.0.0/0"
2026-02-28 00:02:16.506768 | orchestrator |     + security_group_id       = (known after apply)
2026-02-28 00:02:16.506772 | orchestrator |     + tenant_id               = (known after apply)
2026-02-28 00:02:16.506776 | orchestrator |   }
2026-02-28 00:02:16.506870 | orchestrator |
2026-02-28 00:02:16.506882 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
2026-02-28 00:02:16.506886 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
2026-02-28 00:02:16.506890 | orchestrator |     + description             = "wireguard"
2026-02-28 00:02:16.506894 | orchestrator |     + direction               = "ingress"
2026-02-28 00:02:16.506898 | orchestrator |     + ethertype               = "IPv4"
2026-02-28 00:02:16.506902 | orchestrator |     + id                      = (known after apply)
2026-02-28 00:02:16.506906 | orchestrator |     + port_range_max          = 51820
2026-02-28 00:02:16.506909 | orchestrator |     + port_range_min          = 51820
2026-02-28 00:02:16.506913 | orchestrator |     + protocol                = "udp"
2026-02-28 00:02:16.506917 | orchestrator |     + region                  = (known after apply)
2026-02-28 00:02:16.506921 | orchestrator |     + remote_address_group_id = (known after apply)
2026-02-28 00:02:16.506924 | orchestrator |     + remote_group_id         = (known after apply)
2026-02-28 00:02:16.506928 | orchestrator |     + remote_ip_prefix        = "0.0.0.0/0"
2026-02-28 00:02:16.506932 | orchestrator |     + security_group_id       = (known after apply)
2026-02-28 00:02:16.506936 | orchestrator |     + tenant_id               = (known after apply)
2026-02-28 00:02:16.506940 | orchestrator |   }
2026-02-28 00:02:16.507001 | orchestrator |
2026-02-28 00:02:16.507012 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
2026-02-28 00:02:16.507017 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
2026-02-28 00:02:16.507021 | orchestrator |     + direction               = "ingress"
2026-02-28 00:02:16.507025 | orchestrator |     + ethertype               = "IPv4"
2026-02-28 00:02:16.507028 | orchestrator |     + id                      = (known after apply)
2026-02-28 00:02:16.507032 | orchestrator |     + protocol                = "tcp"
2026-02-28 00:02:16.507036 | orchestrator |     + region                  = (known after apply)
2026-02-28 00:02:16.507040 | orchestrator |     + remote_address_group_id = (known after apply)
2026-02-28 00:02:16.507044 | orchestrator |     + remote_group_id         = (known after apply)
2026-02-28 00:02:16.507047 | orchestrator |     + remote_ip_prefix        = "192.168.16.0/20"
2026-02-28 00:02:16.507051 | orchestrator |     + security_group_id       = (known after apply)
2026-02-28 00:02:16.507055 | orchestrator |     + tenant_id               = (known after apply)
2026-02-28 00:02:16.507059 | orchestrator |   }
2026-02-28 00:02:16.507119 | orchestrator |
2026-02-28 00:02:16.507130 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
2026-02-28 00:02:16.507135 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
2026-02-28 00:02:16.507139 | orchestrator |     + direction               = "ingress"
2026-02-28 00:02:16.507142 | orchestrator |     + ethertype               = "IPv4"
2026-02-28 00:02:16.507146 | orchestrator |     + id                      = (known after apply)
2026-02-28 00:02:16.507150 | orchestrator |     + protocol                = "udp"
2026-02-28 00:02:16.507154 | orchestrator |     + region                  = (known after apply)
2026-02-28 00:02:16.507157 | orchestrator |     + remote_address_group_id = (known after apply)
2026-02-28 00:02:16.507161 | orchestrator |     + remote_group_id         = (known after apply)
2026-02-28 00:02:16.507165 | orchestrator |     + remote_ip_prefix        = "192.168.16.0/20"
2026-02-28 00:02:16.507169 | orchestrator |     + security_group_id       = (known after apply)
2026-02-28 00:02:16.507173 | orchestrator |     + tenant_id               = (known after apply)
2026-02-28 00:02:16.507176 | orchestrator |   }
2026-02-28 00:02:16.507235 | orchestrator |
2026-02-28 00:02:16.507245 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
2026-02-28 00:02:16.507258 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2026-02-28 00:02:16.507262 | orchestrator |     + direction               = "ingress"
2026-02-28 00:02:16.507265 | orchestrator |     + ethertype               = "IPv4"
2026-02-28 00:02:16.507269 | orchestrator |     + id                      = (known after apply)
2026-02-28 00:02:16.507273 | orchestrator |     + protocol                = "icmp"
2026-02-28 00:02:16.507277 | orchestrator |     + region                  = (known after apply)
2026-02-28 00:02:16.507281 | orchestrator |     + remote_address_group_id = (known after apply)
2026-02-28 00:02:16.507285 | orchestrator |     + remote_group_id         = (known after apply)
2026-02-28 00:02:16.507288 | orchestrator |     + remote_ip_prefix        = "0.0.0.0/0"
2026-02-28 00:02:16.507292 | orchestrator |     + security_group_id       = (known after apply)
2026-02-28 00:02:16.507296 | orchestrator |     + tenant_id               = (known after apply)
2026-02-28 00:02:16.507300 | orchestrator |   }
2026-02-28 00:02:16.507362 | orchestrator |
2026-02-28 00:02:16.507374 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-02-28 00:02:16.507378 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-02-28 00:02:16.507382 | orchestrator |     + direction               = "ingress"
2026-02-28 00:02:16.507386 | orchestrator |     + ethertype               = "IPv4"
2026-02-28 00:02:16.507390 | orchestrator |     + id                      = (known after apply)
2026-02-28 00:02:16.507394 | orchestrator |     + protocol                = "tcp"
2026-02-28 00:02:16.507398 | orchestrator |     + region                  = (known after apply)
2026-02-28 00:02:16.507401 | orchestrator |     + remote_address_group_id = (known after apply)
2026-02-28 00:02:16.507408 | orchestrator |     + remote_group_id         = (known after apply)
2026-02-28 00:02:16.507412 | orchestrator |     + remote_ip_prefix        = "0.0.0.0/0"
2026-02-28 00:02:16.507416 | orchestrator |     + security_group_id       = (known after apply)
2026-02-28 00:02:16.507419 | orchestrator |     + tenant_id               = (known after apply)
2026-02-28 00:02:16.507423 | orchestrator |   }
2026-02-28 00:02:16.507482 | orchestrator |
2026-02-28 00:02:16.507493 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-02-28 00:02:16.507497 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-02-28 00:02:16.507501 | orchestrator |     + direction               = "ingress"
2026-02-28 00:02:16.507505 | orchestrator |     + ethertype               = "IPv4"
2026-02-28 00:02:16.507509 | orchestrator |     + id                      = (known after apply)
2026-02-28 00:02:16.507513 | orchestrator |     + protocol                = "udp"
2026-02-28 00:02:16.507517 | orchestrator |     + region                  = (known after apply)
2026-02-28 00:02:16.507521 | orchestrator |     + remote_address_group_id = (known after apply)
2026-02-28 00:02:16.507524 | orchestrator |     + remote_group_id         = (known after apply)
2026-02-28 00:02:16.507528 | orchestrator |     + remote_ip_prefix        = "0.0.0.0/0"
2026-02-28 00:02:16.507532 | orchestrator |     + security_group_id       = (known after apply)
2026-02-28 00:02:16.507536 | orchestrator |     + tenant_id               = (known after apply)
2026-02-28 00:02:16.507540 | orchestrator |   }
2026-02-28 00:02:16.507600 | orchestrator |
2026-02-28 00:02:16.507611 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-02-28 00:02:16.507615 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-02-28 00:02:16.507619 | orchestrator |     + direction               = "ingress"
2026-02-28 00:02:16.507625 | orchestrator |     + ethertype               = "IPv4"
2026-02-28 00:02:16.507629 | orchestrator |     + id                      = (known after apply)
2026-02-28 00:02:16.507633 | orchestrator |     + protocol                = "icmp"
2026-02-28 00:02:16.507637 | orchestrator |     + region                  = (known after apply)
2026-02-28 00:02:16.507641 | orchestrator |     + remote_address_group_id = (known after apply)
2026-02-28 00:02:16.507644 | orchestrator |     + remote_group_id         = (known after apply)
2026-02-28 00:02:16.507648 | orchestrator |     + remote_ip_prefix        = "0.0.0.0/0"
2026-02-28 00:02:16.507652 | orchestrator |     + security_group_id       = (known after apply)
2026-02-28 00:02:16.507656 | orchestrator |     + tenant_id               = (known after apply)
2026-02-28 00:02:16.507663 | orchestrator |   }
2026-02-28 00:02:16.507724 | orchestrator |
2026-02-28 00:02:16.507735 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-02-28 00:02:16.507739 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-02-28 00:02:16.507743 | orchestrator |     + description             = "vrrp"
2026-02-28 00:02:16.507747 | orchestrator |     + direction               = "ingress"
2026-02-28 00:02:16.507751 | orchestrator |     + ethertype               = "IPv4"
2026-02-28 00:02:16.507755 | orchestrator |     + id                      = (known after apply)
2026-02-28 00:02:16.507758 | orchestrator |     + protocol                = "112"
2026-02-28 00:02:16.507762 | orchestrator |     + region                  = (known after apply)
2026-02-28 00:02:16.507766 | orchestrator |     + remote_address_group_id = (known after apply)
2026-02-28 00:02:16.507770 | orchestrator |     + remote_group_id         = (known after apply)
2026-02-28 00:02:16.507774 | orchestrator |     + remote_ip_prefix        = "0.0.0.0/0"
2026-02-28 00:02:16.507777 | orchestrator |     + security_group_id       = (known after apply)
2026-02-28 00:02:16.507781 | orchestrator |     + tenant_id               = (known after apply)
2026-02-28 00:02:16.507785 | orchestrator |   }
2026-02-28 00:02:16.507847 | orchestrator |
2026-02-28 00:02:16.507859 | orchestrator |   # openstack_networking_secgroup_v2.security_group_management will be created
2026-02-28 00:02:16.507864 | orchestrator |   + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-02-28 00:02:16.507868 | orchestrator |     + all_tags    = (known after apply)
2026-02-28 00:02:16.507872 | orchestrator |     + description = "management security group"
2026-02-28 00:02:16.507875 | orchestrator |     + id          = (known after apply)
2026-02-28 00:02:16.507879 | orchestrator |     + name        = "testbed-management"
2026-02-28 00:02:16.507883 | orchestrator |     + region      = (known after apply)
2026-02-28 00:02:16.507887 | orchestrator |     + stateful    = (known after apply)
2026-02-28 00:02:16.507891 | orchestrator |     + tenant_id   = (known after apply)
2026-02-28 00:02:16.507895 | orchestrator |   }
2026-02-28 00:02:16.507940 | orchestrator |
2026-02-28 00:02:16.507951 | orchestrator |   # openstack_networking_secgroup_v2.security_group_node will be created
2026-02-28 00:02:16.507956 | orchestrator |   + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-02-28 00:02:16.507960 | orchestrator |     + all_tags    = (known after apply)
2026-02-28 00:02:16.507964 | orchestrator |     + description = "node security group"
2026-02-28 00:02:16.507967 | orchestrator |     + id          = (known after apply)
2026-02-28 00:02:16.507971 | orchestrator |     + name        = "testbed-node"
2026-02-28 00:02:16.507975 | orchestrator |     + region      = (known after apply)
2026-02-28 00:02:16.507979 | orchestrator |     + stateful    = (known after apply)
2026-02-28 00:02:16.507983 | orchestrator |     + tenant_id   = (known after apply)
2026-02-28 00:02:16.507986 | orchestrator |   }
2026-02-28 00:02:16.508093 | orchestrator |
2026-02-28 00:02:16.508105 | orchestrator |   # openstack_networking_subnet_v2.subnet_management will be created
2026-02-28 00:02:16.508109 | orchestrator |   + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-02-28 00:02:16.508113 | orchestrator |     + all_tags          = (known after apply)
2026-02-28 00:02:16.508117 | orchestrator |     + cidr              = "192.168.16.0/20"
2026-02-28 00:02:16.508121 | orchestrator |     + dns_nameservers   = [
2026-02-28 00:02:16.508125 | orchestrator |         + "8.8.8.8",
2026-02-28 00:02:16.508129 | orchestrator |         + "9.9.9.9",
2026-02-28 00:02:16.508132 | orchestrator |       ]
2026-02-28 00:02:16.508136 | orchestrator |     + enable_dhcp       = true
2026-02-28 00:02:16.508140 | orchestrator |     + gateway_ip        = (known after apply)
2026-02-28 00:02:16.508144 | orchestrator |     + id                = (known after apply)
2026-02-28 00:02:16.508148 | orchestrator |     + ip_version        = 4
2026-02-28 00:02:16.508151 | orchestrator |     + ipv6_address_mode = (known after apply)
2026-02-28 00:02:16.508155 | orchestrator |     + ipv6_ra_mode      = (known after apply)
2026-02-28 00:02:16.508159 | orchestrator |     + name              = "subnet-testbed-management"
2026-02-28 00:02:16.508163 | orchestrator | + network_id = (known after apply) 2026-02-28 00:02:16.508167 | orchestrator | + no_gateway = false 2026-02-28 00:02:16.508170 | orchestrator | + region = (known after apply) 2026-02-28 00:02:16.508174 | orchestrator | + service_types = (known after apply) 2026-02-28 00:02:16.508182 | orchestrator | + tenant_id = (known after apply) 2026-02-28 00:02:16.508186 | orchestrator | 2026-02-28 00:02:16.508189 | orchestrator | + allocation_pool { 2026-02-28 00:02:16.508193 | orchestrator | + end = "192.168.31.250" 2026-02-28 00:02:16.508197 | orchestrator | + start = "192.168.31.200" 2026-02-28 00:02:16.508201 | orchestrator | } 2026-02-28 00:02:16.508204 | orchestrator | } 2026-02-28 00:02:16.508235 | orchestrator | 2026-02-28 00:02:16.508246 | orchestrator | # terraform_data.image will be created 2026-02-28 00:02:16.508251 | orchestrator | + resource "terraform_data" "image" { 2026-02-28 00:02:16.508254 | orchestrator | + id = (known after apply) 2026-02-28 00:02:16.508258 | orchestrator | + input = "Ubuntu 24.04" 2026-02-28 00:02:16.508262 | orchestrator | + output = (known after apply) 2026-02-28 00:02:16.508266 | orchestrator | } 2026-02-28 00:02:16.508294 | orchestrator | 2026-02-28 00:02:16.508305 | orchestrator | # terraform_data.image_node will be created 2026-02-28 00:02:16.508310 | orchestrator | + resource "terraform_data" "image_node" { 2026-02-28 00:02:16.508314 | orchestrator | + id = (known after apply) 2026-02-28 00:02:16.508317 | orchestrator | + input = "Ubuntu 24.04" 2026-02-28 00:02:16.508321 | orchestrator | + output = (known after apply) 2026-02-28 00:02:16.508325 | orchestrator | } 2026-02-28 00:02:16.508339 | orchestrator | 2026-02-28 00:02:16.508344 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-02-28 00:02:16.508355 | orchestrator | 2026-02-28 00:02:16.508360 | orchestrator | Changes to Outputs: 2026-02-28 00:02:16.508370 | orchestrator | + manager_address = (sensitive value) 2026-02-28 00:02:16.508374 | orchestrator | + private_key = (sensitive value) 2026-02-28 00:02:16.727242 | orchestrator | terraform_data.image: Creating... 2026-02-28 00:02:16.727601 | orchestrator | terraform_data.image_node: Creating... 2026-02-28 00:02:16.727951 | orchestrator | terraform_data.image: Creation complete after 0s [id=8a9571aa-6484-fd88-0071-c9c16fc64d71] 2026-02-28 00:02:16.729282 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=3bf0a42f-29b1-41e3-ff56-bb75bc30407e] 2026-02-28 00:02:16.748427 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-02-28 00:02:16.754387 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-02-28 00:02:16.759129 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-02-28 00:02:16.760444 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-02-28 00:02:16.760486 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-02-28 00:02:16.761830 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-02-28 00:02:16.762517 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-02-28 00:02:16.762677 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-02-28 00:02:16.764969 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-02-28 00:02:16.765039 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-02-28 00:02:17.246573 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-02-28 00:02:17.251494 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 
2026-02-28 00:02:17.256221 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-02-28 00:02:17.259854 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-02-28 00:02:17.275814 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2026-02-28 00:02:17.284150 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-02-28 00:02:17.835757 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=0a3ef3d4-cd8c-4e88-a8e7-03f099a211cb] 2026-02-28 00:02:17.846173 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-02-28 00:02:20.380213 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=151bff65-91b8-4b11-a525-96a3d98709b9] 2026-02-28 00:02:20.387956 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-02-28 00:02:20.404802 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=2388cee9-22a9-4416-93b3-e236454bc031] 2026-02-28 00:02:20.412669 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-02-28 00:02:20.415421 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=fa3b351c-e54b-439c-bac1-d7e08e27df4b] 2026-02-28 00:02:20.429975 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-02-28 00:02:20.440066 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=810ccfde-b37c-4538-b69c-a55db736621a] 2026-02-28 00:02:20.455019 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 
2026-02-28 00:02:20.459930 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=9a1bcb93-f154-4a17-8f9d-a00d049f4cc1] 2026-02-28 00:02:20.459972 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=aebfdaae-19e6-4277-9533-aca5f477cfa9] 2026-02-28 00:02:20.472590 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-02-28 00:02:20.472744 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-02-28 00:02:20.512102 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=afb4b4ce-eec7-46b2-91b5-87577cac503b] 2026-02-28 00:02:20.524133 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-02-28 00:02:20.526598 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=55f77cdf6a5169852bdbeb5ebb1a8bcb925d63d1] 2026-02-28 00:02:20.530146 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=96cb3389-09b8-4702-8328-a447a406a3bc] 2026-02-28 00:02:20.535197 | orchestrator | local_file.id_rsa_pub: Creating... 2026-02-28 00:02:20.538077 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 
2026-02-28 00:02:20.540783 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=0b0c8341a05b4e01944fb703485a70339a74f1bf] 2026-02-28 00:02:20.562096 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=16fcc6e7-951a-43ed-8f3a-017ae19ace76] 2026-02-28 00:02:21.229385 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=50f8361b-a773-4dce-84a5-54d4f5c9ff6b] 2026-02-28 00:02:21.383322 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 0s [id=96dbcb2c-ca90-433c-9e18-5613376f4e9c] 2026-02-28 00:02:21.390071 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-02-28 00:02:23.764712 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=1d43ef31-86cc-4f7d-aec6-7bed74b0054d] 2026-02-28 00:02:23.788256 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=84e1ce59-bd95-40da-9f03-5819b7d1b103] 2026-02-28 00:02:23.818742 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=370cf4d2-63bd-48d2-9d3a-0a18fe924203] 2026-02-28 00:02:23.828020 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=ea0db2d4-7821-45ce-aa4b-0ff26e9cf878] 2026-02-28 00:02:23.873662 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=85345139-bc47-4fee-b6f9-5fb160253b97] 2026-02-28 00:02:23.929958 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=ba97938d-26f2-4bf0-9eef-5f523c574980] 2026-02-28 00:02:24.807486 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=e0500cfd-b452-4b2a-8bab-4b451622470e] 2026-02-28 00:02:24.810580 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 
2026-02-28 00:02:24.811504 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-02-28 00:02:24.813404 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 2026-02-28 00:02:25.018961 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=06cdc40d-dd59-47cf-baaa-9a16a5cba81a] 2026-02-28 00:02:25.038102 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-02-28 00:02:25.047202 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-02-28 00:02:25.047252 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-02-28 00:02:25.047268 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-02-28 00:02:25.047273 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-02-28 00:02:25.049636 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-02-28 00:02:25.080515 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=743f3556-1152-422b-a158-e3295f9e4c71] 2026-02-28 00:02:25.085718 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-02-28 00:02:25.086677 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-02-28 00:02:25.086778 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-02-28 00:02:25.216611 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=1c8c6862-4f45-4696-8a1b-c1cf6dd34393] 2026-02-28 00:02:25.224185 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 
2026-02-28 00:02:25.260785 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=dc29ee28-35fd-4218-81c7-cafe486a7d2a] 2026-02-28 00:02:25.276039 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-02-28 00:02:25.478136 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=7044a5e9-5c4c-4313-8df2-385009dd188b] 2026-02-28 00:02:25.486747 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-02-28 00:02:25.526085 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=c29ffbaf-57f2-47da-b516-5099c26da5b3] 2026-02-28 00:02:25.536956 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-02-28 00:02:25.621674 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=babbb74c-c564-4dae-a87c-b40076e19b78] 2026-02-28 00:02:25.629967 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-02-28 00:02:25.683556 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=43e4eb1d-bab2-416a-b9d4-74b414ac38c2] 2026-02-28 00:02:25.687487 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-02-28 00:02:25.860162 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=664df5ce-515d-4e72-823e-1c91317ce36b] 2026-02-28 00:02:25.868915 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 
2026-02-28 00:02:25.901186 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=afb85ed5-907f-456e-8adb-9324d942847b] 2026-02-28 00:02:25.910543 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=6150d2da-40cf-4c1c-83d2-974306e77cb4] 2026-02-28 00:02:25.930248 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=d60ef23e-3600-4030-98ed-02c69d8b04cc] 2026-02-28 00:02:26.027369 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=929c88dc-44be-4e08-9788-71866f6119e6] 2026-02-28 00:02:26.461147 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 0s [id=77a2b16e-dae5-472a-a067-620ed04cd208] 2026-02-28 00:02:26.614763 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=c1cb340e-43ec-4872-96a7-c5ce9fc5b7bc] 2026-02-28 00:02:26.665488 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=ff0572b5-c230-4d90-b806-e2dfa5764205] 2026-02-28 00:02:26.780452 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=3e277e0f-4557-42e6-a817-66408d8ef541] 2026-02-28 00:02:26.862340 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=da34a032-2b4f-459a-a882-b728a9d645ce] 2026-02-28 00:02:27.750496 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=fc8336a9-be9d-48ee-bee7-bbae14e6b223] 2026-02-28 00:02:27.765912 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-02-28 00:02:27.786074 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 
2026-02-28 00:02:27.786141 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-02-28 00:02:27.795456 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-02-28 00:02:27.813697 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-02-28 00:02:27.819495 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-02-28 00:02:27.822413 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-02-28 00:02:29.097500 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=c0a924e8-fc39-4871-b036-d1648ea66010] 2026-02-28 00:02:29.103231 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-02-28 00:02:29.112463 | orchestrator | local_file.inventory: Creating... 2026-02-28 00:02:29.117367 | orchestrator | local_file.inventory: Creation complete after 0s [id=b4610271ab745dd3a0cb9639b3dce398eab41466] 2026-02-28 00:02:29.118081 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-02-28 00:02:29.123361 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=c9228e1a0d11b70710541785e35a0f4f5ce10c4b] 2026-02-28 00:02:29.964172 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=c0a924e8-fc39-4871-b036-d1648ea66010] 2026-02-28 00:02:37.788528 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-02-28 00:02:37.788636 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-02-28 00:02:37.800022 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-02-28 00:02:37.812049 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... 
[10s elapsed] 2026-02-28 00:02:37.825952 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-02-28 00:02:37.828161 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-02-28 00:02:47.798375 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-02-28 00:02:47.798479 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-02-28 00:02:47.800606 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-02-28 00:02:47.812955 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-02-28 00:02:47.826153 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-02-28 00:02:47.829287 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-02-28 00:02:48.712287 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=1d2373f4-dbee-45be-8931-847a453e688d] 2026-02-28 00:02:48.757681 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=6cf0b07b-04e3-41aa-b1fb-8005f37b987f] 2026-02-28 00:02:57.798723 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-02-28 00:02:57.801251 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2026-02-28 00:02:57.826564 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-02-28 00:02:57.829838 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... 
[30s elapsed] 2026-02-28 00:02:58.795596 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=779f23ea-59b4-4ed9-bdf2-3c57e6c2591b] 2026-02-28 00:02:59.005107 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=42d81aa8-ab1b-49c3-b8e8-d9ab84d6d071] 2026-02-28 00:02:59.186259 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=4f7d8fb4-8a5a-4a71-b1ac-661b78e08a56] 2026-02-28 00:03:07.807171 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2026-02-28 00:03:08.729380 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=5b8f348a-51b1-43ce-8767-30ce8cf70bf2] 2026-02-28 00:03:08.759010 | orchestrator | null_resource.node_semaphore: Creating... 2026-02-28 00:03:08.760206 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-02-28 00:03:08.767935 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=7520862807420260166] 2026-02-28 00:03:08.768262 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-02-28 00:03:08.778655 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-02-28 00:03:08.785218 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-02-28 00:03:08.785938 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-02-28 00:03:08.791617 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-02-28 00:03:08.791667 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-02-28 00:03:08.791672 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 
2026-02-28 00:03:08.799456 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 2026-02-28 00:03:08.802841 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-02-28 00:03:12.173398 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=4f7d8fb4-8a5a-4a71-b1ac-661b78e08a56/aebfdaae-19e6-4277-9533-aca5f477cfa9] 2026-02-28 00:03:12.193646 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=5b8f348a-51b1-43ce-8767-30ce8cf70bf2/afb4b4ce-eec7-46b2-91b5-87577cac503b] 2026-02-28 00:03:12.248017 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=1d2373f4-dbee-45be-8931-847a453e688d/fa3b351c-e54b-439c-bac1-d7e08e27df4b] 2026-02-28 00:03:12.265701 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=4f7d8fb4-8a5a-4a71-b1ac-661b78e08a56/810ccfde-b37c-4538-b69c-a55db736621a] 2026-02-28 00:03:12.272494 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=5b8f348a-51b1-43ce-8767-30ce8cf70bf2/16fcc6e7-951a-43ed-8f3a-017ae19ace76] 2026-02-28 00:03:12.503463 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=1d2373f4-dbee-45be-8931-847a453e688d/151bff65-91b8-4b11-a525-96a3d98709b9] 2026-02-28 00:03:18.358655 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 9s [id=1d2373f4-dbee-45be-8931-847a453e688d/2388cee9-22a9-4416-93b3-e236454bc031] 2026-02-28 00:03:18.362597 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 9s [id=4f7d8fb4-8a5a-4a71-b1ac-661b78e08a56/96cb3389-09b8-4702-8328-a447a406a3bc] 2026-02-28 00:03:18.391198 | orchestrator | 
openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 9s [id=5b8f348a-51b1-43ce-8767-30ce8cf70bf2/9a1bcb93-f154-4a17-8f9d-a00d049f4cc1] 2026-02-28 00:03:18.805943 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-02-28 00:03:28.806869 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-02-28 00:03:29.227393 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=698bd01a-95a9-4abd-a663-e90627dbaeb6] 2026-02-28 00:03:29.272674 | orchestrator | 2026-02-28 00:03:29.272838 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-02-28 00:03:29.272880 | orchestrator | 2026-02-28 00:03:29.272903 | orchestrator | Outputs: 2026-02-28 00:03:29.272923 | orchestrator | 2026-02-28 00:03:29.272943 | orchestrator | manager_address = <sensitive> 2026-02-28 00:03:29.272964 | orchestrator | private_key = <sensitive> 2026-02-28 00:03:29.529871 | orchestrator | ok: Runtime: 0:01:22.466303 2026-02-28 00:03:29.562590 | 2026-02-28 00:03:29.562718 | TASK [Create infrastructure (stable)] 2026-02-28 00:03:30.100986 | orchestrator | skipping: Conditional result was False 2026-02-28 00:03:30.117595 | 2026-02-28 00:03:30.117760 | TASK [Fetch manager address] 2026-02-28 00:03:30.597723 | orchestrator | ok 2026-02-28 00:03:30.605476 | 2026-02-28 00:03:30.605595 | TASK [Set manager_host address] 2026-02-28 00:03:30.693897 | orchestrator | ok 2026-02-28 00:03:30.704649 | 2026-02-28 00:03:30.704789 | LOOP [Update ansible collections] 2026-02-28 00:03:31.607626 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-02-28 00:03:31.608338 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-28 00:03:31.608695 | orchestrator | Starting galaxy collection install process 2026-02-28 00:03:31.608769 | orchestrator | Process install dependency map 
2026-02-28 00:03:31.608819 | orchestrator | Starting collection install process 2026-02-28 00:03:31.608864 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons' 2026-02-28 00:03:31.608920 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons 2026-02-28 00:03:31.608986 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-02-28 00:03:31.609106 | orchestrator | ok: Item: commons Runtime: 0:00:00.556139 2026-02-28 00:03:32.499280 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-28 00:03:32.499501 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-02-28 00:03:32.499566 | orchestrator | Starting galaxy collection install process 2026-02-28 00:03:32.499607 | orchestrator | Process install dependency map 2026-02-28 00:03:32.499644 | orchestrator | Starting collection install process 2026-02-28 00:03:32.499679 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services' 2026-02-28 00:03:32.499715 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services 2026-02-28 00:03:32.499751 | orchestrator | osism.services:999.0.0 was installed successfully 2026-02-28 00:03:32.499809 | orchestrator | ok: Item: services Runtime: 0:00:00.604157 2026-02-28 00:03:32.521284 | 2026-02-28 00:03:32.521425 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-28 00:03:43.561707 | orchestrator | ok 2026-02-28 00:03:43.574373 | 2026-02-28 00:03:43.574502 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-28 00:04:43.619101 | orchestrator | ok 2026-02-28 00:04:43.626499 | 2026-02-28 00:04:43.626606 
| TASK [Fetch manager ssh hostkey] 2026-02-28 00:04:45.197987 | orchestrator | Output suppressed because no_log was given 2026-02-28 00:04:45.212981 | 2026-02-28 00:04:45.213161 | TASK [Get ssh keypair from terraform environment] 2026-02-28 00:04:45.753312 | orchestrator | ok: Runtime: 0:00:00.006535 2026-02-28 00:04:45.769876 | 2026-02-28 00:04:45.770047 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-28 00:04:45.819123 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-02-28 00:04:45.829327 | 2026-02-28 00:04:45.829494 | TASK [Run manager part 0] 2026-02-28 00:04:46.703018 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-28 00:04:46.748721 | orchestrator | 2026-02-28 00:04:46.748803 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-02-28 00:04:46.748811 | orchestrator | 2026-02-28 00:04:46.748823 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-02-28 00:04:48.990716 | orchestrator | ok: [testbed-manager] 2026-02-28 00:04:48.990833 | orchestrator | 2026-02-28 00:04:48.990889 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-28 00:04:48.990914 | orchestrator | 2026-02-28 00:04:48.990939 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:04:51.247676 | orchestrator | ok: [testbed-manager] 2026-02-28 00:04:51.247742 | orchestrator | 2026-02-28 00:04:51.247776 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-28 00:04:51.931778 | orchestrator | ok: [testbed-manager] 2026-02-28 00:04:51.931834 | orchestrator | 2026-02-28 00:04:51.931843 | orchestrator | TASK 
[Set repo_path fact] ****************************************************** 2026-02-28 00:04:51.979299 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:04:51.979344 | orchestrator | 2026-02-28 00:04:51.979353 | orchestrator | TASK [Update package cache] **************************************************** 2026-02-28 00:04:52.012465 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:04:52.012549 | orchestrator | 2026-02-28 00:04:52.012566 | orchestrator | TASK [Install required packages] *********************************************** 2026-02-28 00:04:52.045178 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:04:52.045265 | orchestrator | 2026-02-28 00:04:52.045282 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-28 00:04:52.086418 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:04:52.086511 | orchestrator | 2026-02-28 00:04:52.086529 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-28 00:04:52.123624 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:04:52.123684 | orchestrator | 2026-02-28 00:04:52.123695 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-02-28 00:04:52.167605 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:04:52.167698 | orchestrator | 2026-02-28 00:04:52.167719 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-02-28 00:04:52.210085 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:04:52.210144 | orchestrator | 2026-02-28 00:04:52.210154 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-02-28 00:04:53.021226 | orchestrator | changed: [testbed-manager] 2026-02-28 00:04:53.021286 | orchestrator | 2026-02-28 00:04:53.021297 | orchestrator | TASK [Update APT cache and run dist-upgrade] 
*********************************** 2026-02-28 00:07:52.569483 | orchestrator | changed: [testbed-manager] 2026-02-28 00:07:52.569547 | orchestrator | 2026-02-28 00:07:52.569561 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-28 00:09:29.536495 | orchestrator | changed: [testbed-manager] 2026-02-28 00:09:29.536541 | orchestrator | 2026-02-28 00:09:29.536551 | orchestrator | TASK [Install required packages] *********************************************** 2026-02-28 00:09:52.530982 | orchestrator | changed: [testbed-manager] 2026-02-28 00:09:52.531027 | orchestrator | 2026-02-28 00:09:52.531037 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-28 00:10:03.165825 | orchestrator | changed: [testbed-manager] 2026-02-28 00:10:03.165861 | orchestrator | 2026-02-28 00:10:03.165867 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-28 00:10:03.216278 | orchestrator | ok: [testbed-manager] 2026-02-28 00:10:03.216362 | orchestrator | 2026-02-28 00:10:03.216379 | orchestrator | TASK [Get current user] ******************************************************** 2026-02-28 00:10:04.055534 | orchestrator | ok: [testbed-manager] 2026-02-28 00:10:04.055617 | orchestrator | 2026-02-28 00:10:04.055635 | orchestrator | TASK [Create venv directory] *************************************************** 2026-02-28 00:10:04.821825 | orchestrator | changed: [testbed-manager] 2026-02-28 00:10:04.821895 | orchestrator | 2026-02-28 00:10:04.821906 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-02-28 00:10:12.493730 | orchestrator | changed: [testbed-manager] 2026-02-28 00:10:12.493813 | orchestrator | 2026-02-28 00:10:12.493849 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-02-28 00:10:18.524015 | orchestrator | changed: 
[testbed-manager] 2026-02-28 00:10:18.524077 | orchestrator | 2026-02-28 00:10:18.524090 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-02-28 00:10:21.347914 | orchestrator | changed: [testbed-manager] 2026-02-28 00:10:21.348004 | orchestrator | 2026-02-28 00:10:21.348020 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-02-28 00:10:23.190993 | orchestrator | changed: [testbed-manager] 2026-02-28 00:10:23.191054 | orchestrator | 2026-02-28 00:10:23.191064 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-02-28 00:10:24.351353 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-28 00:10:24.351451 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-28 00:10:24.351466 | orchestrator | 2026-02-28 00:10:24.351479 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-02-28 00:10:24.396082 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-28 00:10:24.396148 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-28 00:10:24.396154 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-28 00:10:24.396159 | orchestrator | deprecation_warnings=False in ansible.cfg. 
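The two pinned installs above ('requests >= 2.32.2', 'docker >= 7.1.0') rely on pip to enforce the minimum versions. Outside of pip, the same ">=" check can be sketched in shell with GNU `sort -V`; the helper name `version_ge` is ours for illustration and is not part of the testbed scripts:

```shell
#!/usr/bin/env bash
# version_ge A B -> succeeds when version A >= version B.
# Uses GNU coreutils `sort -V` (natural version ordering, available on
# the Ubuntu nodes this job runs on).
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | tail -n1)" = "$1" ]
}

# Example: check a candidate requests version against the >=2.32.2 pin.
if version_ge "2.32.2" "2.31.0"; then
    echo "2.32.2 satisfies >=2.31.0"
fi
```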
2026-02-28 00:10:27.767042 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-28 00:10:27.767094 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-28 00:10:27.767104 | orchestrator | 2026-02-28 00:10:27.767114 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-02-28 00:10:28.387575 | orchestrator | changed: [testbed-manager] 2026-02-28 00:10:28.387709 | orchestrator | 2026-02-28 00:10:28.387733 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-02-28 00:11:51.531725 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-02-28 00:11:51.531825 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-02-28 00:11:51.531842 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-02-28 00:11:51.531854 | orchestrator | 2026-02-28 00:11:51.531866 | orchestrator | TASK [Install local collections] *********************************************** 2026-02-28 00:11:54.034618 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-02-28 00:11:54.034662 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-02-28 00:11:54.034669 | orchestrator | 2026-02-28 00:11:54.034677 | orchestrator | PLAY [Create operator user] **************************************************** 2026-02-28 00:11:54.034685 | orchestrator | 2026-02-28 00:11:54.034693 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:11:55.487248 | orchestrator | ok: [testbed-manager] 2026-02-28 00:11:55.487348 | orchestrator | 2026-02-28 00:11:55.487379 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-28 00:11:55.541246 | orchestrator | ok: [testbed-manager] 2026-02-28 00:11:55.541301 | 
orchestrator | 2026-02-28 00:11:55.541311 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-28 00:11:55.623639 | orchestrator | ok: [testbed-manager] 2026-02-28 00:11:55.623688 | orchestrator | 2026-02-28 00:11:55.623695 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-28 00:11:56.443224 | orchestrator | changed: [testbed-manager] 2026-02-28 00:11:56.443260 | orchestrator | 2026-02-28 00:11:56.443267 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-28 00:11:57.191470 | orchestrator | changed: [testbed-manager] 2026-02-28 00:11:57.191572 | orchestrator | 2026-02-28 00:11:57.191586 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-28 00:11:58.634633 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-02-28 00:11:58.634720 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-02-28 00:11:58.634738 | orchestrator | 2026-02-28 00:11:58.634768 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-28 00:12:00.109065 | orchestrator | changed: [testbed-manager] 2026-02-28 00:12:00.109490 | orchestrator | 2026-02-28 00:12:00.109539 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-28 00:12:01.946826 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-02-28 00:12:01.946914 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-02-28 00:12:01.946927 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-02-28 00:12:01.946939 | orchestrator | 2026-02-28 00:12:01.946953 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-28 00:12:02.006124 | orchestrator | skipping: 
[testbed-manager] 2026-02-28 00:12:02.006210 | orchestrator | 2026-02-28 00:12:02.006226 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-28 00:12:02.073364 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:12:02.073451 | orchestrator | 2026-02-28 00:12:02.073470 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-28 00:12:02.647483 | orchestrator | changed: [testbed-manager] 2026-02-28 00:12:02.647594 | orchestrator | 2026-02-28 00:12:02.647611 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-28 00:12:02.725195 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:12:02.725291 | orchestrator | 2026-02-28 00:12:02.725316 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-28 00:12:03.619434 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-28 00:12:03.619558 | orchestrator | changed: [testbed-manager] 2026-02-28 00:12:03.619575 | orchestrator | 2026-02-28 00:12:03.619586 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-28 00:12:03.653701 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:12:03.653748 | orchestrator | 2026-02-28 00:12:03.653754 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-28 00:12:03.692155 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:12:03.692595 | orchestrator | 2026-02-28 00:12:03.692645 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-28 00:12:03.732222 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:12:03.732306 | orchestrator | 2026-02-28 00:12:03.732325 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-28 00:12:03.810427 | 
orchestrator | skipping: [testbed-manager]
2026-02-28 00:12:03.810554 | orchestrator |
2026-02-28 00:12:03.810572 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-02-28 00:12:04.572043 | orchestrator | ok: [testbed-manager]
2026-02-28 00:12:04.572114 | orchestrator |
2026-02-28 00:12:04.572124 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-02-28 00:12:04.572132 | orchestrator |
2026-02-28 00:12:04.572140 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-28 00:12:05.991231 | orchestrator | ok: [testbed-manager]
2026-02-28 00:12:05.991300 | orchestrator |
2026-02-28 00:12:05.991319 | orchestrator | TASK [Recursively change ownership of /opt/venv] *******************************
2026-02-28 00:12:06.982582 | orchestrator | changed: [testbed-manager]
2026-02-28 00:12:06.982618 | orchestrator |
2026-02-28 00:12:06.982624 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:12:06.982630 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0
2026-02-28 00:12:06.982635 | orchestrator |
2026-02-28 00:12:07.154356 | orchestrator | ok: Runtime: 0:07:20.964791
2026-02-28 00:12:07.170447 |
2026-02-28 00:12:07.170572 | TASK [Point out that logging in on the manager is now possible]
2026-02-28 00:12:07.215142 | orchestrator | ok: It is already possible to log in to the manager with 'make login'.
2026-02-28 00:12:07.224650 |
2026-02-28 00:12:07.224770 | TASK [Point out that the following task takes some time and does not give any output]
2026-02-28 00:12:07.257742 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
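The two "Wait up to 300 seconds for port 22 to become open and contain \"OpenSSH\"" tasks in this job (before and after the manager reboot) match the behaviour of Ansible's `ansible.builtin.wait_for` module, which polls a port until the given regex appears in the banner. A minimal sketch of such a task, with values inferred from the task name rather than copied from the testbed playbooks (the `manager_address` variable is hypothetical):

```yaml
- name: Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"
  ansible.builtin.wait_for:
    host: "{{ manager_address }}"  # hypothetical variable name
    port: 22
    search_regex: OpenSSH          # wait until the SSH banner is readable
    timeout: 300
```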
2026-02-28 00:12:07.265759 | 2026-02-28 00:12:07.265869 | TASK [Run manager part 1 + 2] 2026-02-28 00:12:08.121830 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-28 00:12:08.177173 | orchestrator | 2026-02-28 00:12:08.177254 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-02-28 00:12:08.177271 | orchestrator | 2026-02-28 00:12:08.177299 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:12:11.209648 | orchestrator | ok: [testbed-manager] 2026-02-28 00:12:11.209810 | orchestrator | 2026-02-28 00:12:11.209862 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-28 00:12:11.246403 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:12:11.246517 | orchestrator | 2026-02-28 00:12:11.246541 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-28 00:12:11.286752 | orchestrator | ok: [testbed-manager] 2026-02-28 00:12:11.286833 | orchestrator | 2026-02-28 00:12:11.286850 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-28 00:12:11.326982 | orchestrator | ok: [testbed-manager] 2026-02-28 00:12:11.327032 | orchestrator | 2026-02-28 00:12:11.327039 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-28 00:12:11.385775 | orchestrator | ok: [testbed-manager] 2026-02-28 00:12:11.385854 | orchestrator | 2026-02-28 00:12:11.385869 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-28 00:12:11.453387 | orchestrator | ok: [testbed-manager] 2026-02-28 00:12:11.453443 | orchestrator | 2026-02-28 00:12:11.453450 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-28 00:12:11.501094 | 
orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-02-28 00:12:11.501169 | orchestrator | 2026-02-28 00:12:11.501195 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-28 00:12:12.235368 | orchestrator | ok: [testbed-manager] 2026-02-28 00:12:12.235438 | orchestrator | 2026-02-28 00:12:12.235450 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-28 00:12:12.284992 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:12:12.285050 | orchestrator | 2026-02-28 00:12:12.285057 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-28 00:12:13.735558 | orchestrator | changed: [testbed-manager] 2026-02-28 00:12:13.735658 | orchestrator | 2026-02-28 00:12:13.735676 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-28 00:12:14.337403 | orchestrator | ok: [testbed-manager] 2026-02-28 00:12:14.337533 | orchestrator | 2026-02-28 00:12:14.337551 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-28 00:12:15.521710 | orchestrator | changed: [testbed-manager] 2026-02-28 00:12:15.521776 | orchestrator | 2026-02-28 00:12:15.521793 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-28 00:12:31.538493 | orchestrator | changed: [testbed-manager] 2026-02-28 00:12:31.538552 | orchestrator | 2026-02-28 00:12:31.538559 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-28 00:12:32.254469 | orchestrator | ok: [testbed-manager] 2026-02-28 00:12:32.254506 | orchestrator | 2026-02-28 00:12:32.254513 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-02-28 00:12:32.301636 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:12:32.301671 | orchestrator | 2026-02-28 00:12:32.301676 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-02-28 00:12:33.294703 | orchestrator | changed: [testbed-manager] 2026-02-28 00:12:33.294931 | orchestrator | 2026-02-28 00:12:33.294967 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-02-28 00:12:34.310924 | orchestrator | changed: [testbed-manager] 2026-02-28 00:12:34.310966 | orchestrator | 2026-02-28 00:12:34.310973 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-02-28 00:12:34.908889 | orchestrator | changed: [testbed-manager] 2026-02-28 00:12:34.908927 | orchestrator | 2026-02-28 00:12:34.908934 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-02-28 00:12:34.952407 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-28 00:12:34.952563 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-28 00:12:34.952579 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-28 00:12:34.952591 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-28 00:12:37.149110 | orchestrator | changed: [testbed-manager] 2026-02-28 00:12:37.149153 | orchestrator | 2026-02-28 00:12:37.149373 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-02-28 00:12:47.795234 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-02-28 00:12:47.795277 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-02-28 00:12:47.795286 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-02-28 00:12:47.795292 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-02-28 00:12:47.795302 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-02-28 00:12:47.795308 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-02-28 00:12:47.795314 | orchestrator | 2026-02-28 00:12:47.795320 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-02-28 00:12:48.889768 | orchestrator | changed: [testbed-manager] 2026-02-28 00:12:48.889868 | orchestrator | 2026-02-28 00:12:48.889885 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-02-28 00:12:48.934289 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:12:48.934328 | orchestrator | 2026-02-28 00:12:48.934336 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-02-28 00:12:52.173003 | orchestrator | changed: [testbed-manager] 2026-02-28 00:12:52.173104 | orchestrator | 2026-02-28 00:12:52.173121 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-02-28 00:12:52.217465 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:12:52.217535 | orchestrator | 2026-02-28 00:12:52.217545 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-02-28 00:14:38.869226 | orchestrator | changed: [testbed-manager] 2026-02-28 
00:14:38.869266 | orchestrator | 2026-02-28 00:14:38.869273 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-28 00:14:40.073557 | orchestrator | ok: [testbed-manager] 2026-02-28 00:14:40.073597 | orchestrator | 2026-02-28 00:14:40.073604 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:14:40.073611 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-02-28 00:14:40.073617 | orchestrator | 2026-02-28 00:14:40.401257 | orchestrator | ok: Runtime: 0:02:32.606913 2026-02-28 00:14:40.419267 | 2026-02-28 00:14:40.419472 | TASK [Reboot manager] 2026-02-28 00:14:41.955473 | orchestrator | ok: Runtime: 0:00:00.965509 2026-02-28 00:14:41.972018 | 2026-02-28 00:14:41.972168 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-28 00:14:59.816788 | orchestrator | ok 2026-02-28 00:14:59.824859 | 2026-02-28 00:14:59.824970 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-28 00:15:59.871535 | orchestrator | ok 2026-02-28 00:15:59.882295 | 2026-02-28 00:15:59.882441 | TASK [Deploy manager + bootstrap nodes] 2026-02-28 00:16:02.497937 | orchestrator | 2026-02-28 00:16:02.498188 | orchestrator | # DEPLOY MANAGER 2026-02-28 00:16:02.498212 | orchestrator | 2026-02-28 00:16:02.498227 | orchestrator | + set -e 2026-02-28 00:16:02.498240 | orchestrator | + echo 2026-02-28 00:16:02.498255 | orchestrator | + echo '# DEPLOY MANAGER' 2026-02-28 00:16:02.498273 | orchestrator | + echo 2026-02-28 00:16:02.498323 | orchestrator | + cat /opt/manager-vars.sh 2026-02-28 00:16:02.501521 | orchestrator | export NUMBER_OF_NODES=6 2026-02-28 00:16:02.501563 | orchestrator | 2026-02-28 00:16:02.501575 | orchestrator | export CEPH_VERSION=reef 2026-02-28 00:16:02.501589 | orchestrator | export CONFIGURATION_VERSION=main 2026-02-28 00:16:02.501602 | orchestrator 
| export MANAGER_VERSION=latest 2026-02-28 00:16:02.501626 | orchestrator | export OPENSTACK_VERSION=2025.1 2026-02-28 00:16:02.501637 | orchestrator | 2026-02-28 00:16:02.501656 | orchestrator | export ARA=false 2026-02-28 00:16:02.501668 | orchestrator | export DEPLOY_MODE=manager 2026-02-28 00:16:02.501756 | orchestrator | export TEMPEST=true 2026-02-28 00:16:02.501772 | orchestrator | export IS_ZUUL=true 2026-02-28 00:16:02.501784 | orchestrator | 2026-02-28 00:16:02.501802 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.157 2026-02-28 00:16:02.501815 | orchestrator | export EXTERNAL_API=false 2026-02-28 00:16:02.501826 | orchestrator | 2026-02-28 00:16:02.501837 | orchestrator | export IMAGE_USER=ubuntu 2026-02-28 00:16:02.501853 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-02-28 00:16:02.501864 | orchestrator | 2026-02-28 00:16:02.501875 | orchestrator | export CEPH_STACK=ceph-ansible 2026-02-28 00:16:02.501945 | orchestrator | 2026-02-28 00:16:02.501958 | orchestrator | + echo 2026-02-28 00:16:02.501972 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-28 00:16:02.503134 | orchestrator | ++ export INTERACTIVE=false 2026-02-28 00:16:02.503171 | orchestrator | ++ INTERACTIVE=false 2026-02-28 00:16:02.503192 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-28 00:16:02.503213 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-28 00:16:02.503464 | orchestrator | + source /opt/manager-vars.sh 2026-02-28 00:16:02.503512 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-28 00:16:02.503534 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-28 00:16:02.503597 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-28 00:16:02.503626 | orchestrator | ++ CEPH_VERSION=reef 2026-02-28 00:16:02.503673 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-28 00:16:02.503807 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-28 00:16:02.503829 | orchestrator | ++ export MANAGER_VERSION=latest 2026-02-28 00:16:02.503849 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-02-28 00:16:02.503869 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-02-28 00:16:02.503899 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-02-28 00:16:02.503912 | orchestrator | ++ export ARA=false 2026-02-28 00:16:02.503929 | orchestrator | ++ ARA=false 2026-02-28 00:16:02.503940 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-28 00:16:02.503952 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-28 00:16:02.503962 | orchestrator | ++ export TEMPEST=true 2026-02-28 00:16:02.503973 | orchestrator | ++ TEMPEST=true 2026-02-28 00:16:02.503984 | orchestrator | ++ export IS_ZUUL=true 2026-02-28 00:16:02.503995 | orchestrator | ++ IS_ZUUL=true 2026-02-28 00:16:02.504006 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.157 2026-02-28 00:16:02.504017 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.157 2026-02-28 00:16:02.504028 | orchestrator | ++ export EXTERNAL_API=false 2026-02-28 00:16:02.504039 | orchestrator | ++ EXTERNAL_API=false 2026-02-28 00:16:02.504053 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-28 00:16:02.504072 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-28 00:16:02.504124 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-28 00:16:02.504139 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-28 00:16:02.504164 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-28 00:16:02.504184 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-28 00:16:02.504202 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-02-28 00:16:02.564578 | orchestrator | + docker version 2026-02-28 00:16:02.695404 | orchestrator | Client: Docker Engine - Community 2026-02-28 00:16:02.695504 | orchestrator | Version: 27.5.1 2026-02-28 00:16:02.695519 | orchestrator | API version: 1.47 2026-02-28 00:16:02.695533 | orchestrator | Go version: go1.22.11 2026-02-28 00:16:02.695544 | orchestrator | Git commit: 9f9e405 2026-02-28 00:16:02.695556 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-28 00:16:02.695568 | orchestrator | OS/Arch: linux/amd64 2026-02-28 00:16:02.695579 | orchestrator | Context: default 2026-02-28 00:16:02.695590 | orchestrator | 2026-02-28 00:16:02.695602 | orchestrator | Server: Docker Engine - Community 2026-02-28 00:16:02.695613 | orchestrator | Engine: 2026-02-28 00:16:02.695624 | orchestrator | Version: 27.5.1 2026-02-28 00:16:02.695636 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-02-28 00:16:02.695674 | orchestrator | Go version: go1.22.11 2026-02-28 00:16:02.695786 | orchestrator | Git commit: 4c9b3b0 2026-02-28 00:16:02.695800 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-28 00:16:02.695812 | orchestrator | OS/Arch: linux/amd64 2026-02-28 00:16:02.695823 | orchestrator | Experimental: false 2026-02-28 00:16:02.695835 | orchestrator | containerd: 2026-02-28 00:16:02.695846 | orchestrator | Version: v2.2.1 2026-02-28 00:16:02.695858 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-02-28 00:16:02.695869 | orchestrator | runc: 2026-02-28 00:16:02.695881 | orchestrator | Version: 1.3.4 2026-02-28 00:16:02.695892 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-02-28 00:16:02.695903 | orchestrator | docker-init: 2026-02-28 00:16:02.695913 | orchestrator | Version: 0.19.0 2026-02-28 00:16:02.695925 | orchestrator | GitCommit: de40ad0 2026-02-28 00:16:02.697942 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-02-28 00:16:02.706448 | orchestrator | + set -e 2026-02-28 00:16:02.706552 | orchestrator | + source /opt/manager-vars.sh 2026-02-28 00:16:02.706568 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-28 00:16:02.706583 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-28 00:16:02.706595 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-28 00:16:02.706606 | orchestrator | ++ CEPH_VERSION=reef 2026-02-28 00:16:02.706617 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-28 
00:16:02.706629 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-28 00:16:02.706641 | orchestrator | ++ export MANAGER_VERSION=latest 2026-02-28 00:16:02.706652 | orchestrator | ++ MANAGER_VERSION=latest 2026-02-28 00:16:02.706663 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-02-28 00:16:02.706674 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-02-28 00:16:02.706685 | orchestrator | ++ export ARA=false 2026-02-28 00:16:02.706722 | orchestrator | ++ ARA=false 2026-02-28 00:16:02.706733 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-28 00:16:02.706745 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-28 00:16:02.706756 | orchestrator | ++ export TEMPEST=true 2026-02-28 00:16:02.706767 | orchestrator | ++ TEMPEST=true 2026-02-28 00:16:02.706778 | orchestrator | ++ export IS_ZUUL=true 2026-02-28 00:16:02.706789 | orchestrator | ++ IS_ZUUL=true 2026-02-28 00:16:02.706800 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.157 2026-02-28 00:16:02.706811 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.157 2026-02-28 00:16:02.706822 | orchestrator | ++ export EXTERNAL_API=false 2026-02-28 00:16:02.706833 | orchestrator | ++ EXTERNAL_API=false 2026-02-28 00:16:02.706844 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-28 00:16:02.706855 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-28 00:16:02.706866 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-28 00:16:02.706877 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-28 00:16:02.706888 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-28 00:16:02.706899 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-28 00:16:02.706910 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-28 00:16:02.706921 | orchestrator | ++ export INTERACTIVE=false 2026-02-28 00:16:02.706932 | orchestrator | ++ INTERACTIVE=false 2026-02-28 00:16:02.706943 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-28 00:16:02.706958 | orchestrator | ++ 
OSISM_APPLY_RETRY=1 2026-02-28 00:16:02.706970 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-02-28 00:16:02.706981 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-28 00:16:02.706992 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-02-28 00:16:02.710390 | orchestrator | + set -e 2026-02-28 00:16:02.710419 | orchestrator | + VERSION=reef 2026-02-28 00:16:02.711287 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-02-28 00:16:02.717829 | orchestrator | + [[ -n ceph_version: reef ]] 2026-02-28 00:16:02.717887 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-02-28 00:16:02.721703 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2025.1 2026-02-28 00:16:02.726534 | orchestrator | + set -e 2026-02-28 00:16:02.726580 | orchestrator | + VERSION=2025.1 2026-02-28 00:16:02.727066 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-02-28 00:16:02.731141 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-02-28 00:16:02.731217 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2025.1/g' /opt/configuration/environments/manager/configuration.yml 2026-02-28 00:16:02.734319 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-02-28 00:16:02.734880 | orchestrator | ++ semver latest 7.0.0 2026-02-28 00:16:02.795405 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-28 00:16:02.795499 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-28 00:16:02.795525 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-02-28 00:16:02.796561 | orchestrator | ++ semver latest 10.0.0-0 2026-02-28 00:16:02.853971 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-28 00:16:02.854388 | orchestrator | ++ semver 2025.1 2025.1 2026-02-28 00:16:02.938792 | orchestrator | + [[ 0 -ge 0 ]] 2026-02-28 00:16:02.938888 | orchestrator | + 
sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-28 00:16:02.947587 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-28 00:16:02.951244 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-02-28 00:16:03.039033 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-28 00:16:03.042320 | orchestrator | + source /opt/venv/bin/activate 2026-02-28 00:16:03.043638 | orchestrator | ++ deactivate nondestructive 2026-02-28 00:16:03.043682 | orchestrator | ++ '[' -n '' ']' 2026-02-28 00:16:03.043726 | orchestrator | ++ '[' -n '' ']' 2026-02-28 00:16:03.043738 | orchestrator | ++ hash -r 2026-02-28 00:16:03.043750 | orchestrator | ++ '[' -n '' ']' 2026-02-28 00:16:03.043761 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-28 00:16:03.043773 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-28 00:16:03.043784 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-28 00:16:03.043821 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-28 00:16:03.043833 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-28 00:16:03.043857 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-28 00:16:03.043869 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-28 00:16:03.043882 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-28 00:16:03.043914 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-28 00:16:03.043926 | orchestrator | ++ export PATH 2026-02-28 00:16:03.043949 | orchestrator | ++ '[' -n '' ']' 2026-02-28 00:16:03.043965 | orchestrator | ++ '[' -z '' ']' 2026-02-28 00:16:03.043977 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-28 00:16:03.043988 | orchestrator | ++ PS1='(venv) ' 2026-02-28 00:16:03.044001 | orchestrator | ++ export PS1 2026-02-28 00:16:03.044012 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-28 00:16:03.044023 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-28 00:16:03.044035 | orchestrator | ++ hash -r 2026-02-28 00:16:03.044050 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-02-28 00:16:04.315234 | orchestrator | 2026-02-28 00:16:04.315329 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-02-28 00:16:04.315340 | orchestrator | 2026-02-28 00:16:04.315347 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-28 00:16:04.935967 | orchestrator | ok: [testbed-manager] 2026-02-28 00:16:04.936141 | orchestrator | 2026-02-28 00:16:04.936159 | orchestrator | TASK [Copy fact files] ********************************************************* 
2026-02-28 00:16:05.981375 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:05.981480 | orchestrator | 2026-02-28 00:16:05.981498 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-02-28 00:16:05.981509 | orchestrator | 2026-02-28 00:16:05.981520 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:16:08.418171 | orchestrator | ok: [testbed-manager] 2026-02-28 00:16:08.418290 | orchestrator | 2026-02-28 00:16:08.418309 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-02-28 00:16:08.477783 | orchestrator | ok: [testbed-manager] 2026-02-28 00:16:08.477893 | orchestrator | 2026-02-28 00:16:08.477911 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-02-28 00:16:08.938396 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:08.938522 | orchestrator | 2026-02-28 00:16:08.938541 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-02-28 00:16:08.985660 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:16:08.985823 | orchestrator | 2026-02-28 00:16:08.985846 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-28 00:16:09.335568 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:09.335677 | orchestrator | 2026-02-28 00:16:09.335689 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-02-28 00:16:09.689217 | orchestrator | ok: [testbed-manager] 2026-02-28 00:16:09.689319 | orchestrator | 2026-02-28 00:16:09.689338 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-02-28 00:16:09.813470 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:16:09.813544 | orchestrator | 2026-02-28 00:16:09.813552 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-02-28 00:16:09.813558 | orchestrator | 2026-02-28 00:16:09.813563 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:16:11.633853 | orchestrator | ok: [testbed-manager] 2026-02-28 00:16:11.633968 | orchestrator | 2026-02-28 00:16:11.633986 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-02-28 00:16:11.767913 | orchestrator | included: osism.services.traefik for testbed-manager 2026-02-28 00:16:11.768012 | orchestrator | 2026-02-28 00:16:11.768027 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-02-28 00:16:11.840625 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-02-28 00:16:11.840763 | orchestrator | 2026-02-28 00:16:11.840780 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-02-28 00:16:13.045106 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-02-28 00:16:13.045202 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-02-28 00:16:13.045214 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-02-28 00:16:13.045223 | orchestrator | 2026-02-28 00:16:13.045234 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-02-28 00:16:14.991308 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-02-28 00:16:14.991409 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-02-28 00:16:14.991425 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-02-28 00:16:14.991438 | orchestrator | 2026-02-28 00:16:14.991451 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-02-28 00:16:15.650971 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-28 00:16:15.651077 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:15.651095 | orchestrator | 2026-02-28 00:16:15.651108 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-02-28 00:16:16.387817 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-28 00:16:16.387913 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:16.387927 | orchestrator | 2026-02-28 00:16:16.387938 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-02-28 00:16:16.452171 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:16:16.452266 | orchestrator | 2026-02-28 00:16:16.452282 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-02-28 00:16:16.810327 | orchestrator | ok: [testbed-manager] 2026-02-28 00:16:16.810440 | orchestrator | 2026-02-28 00:16:16.810463 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-02-28 00:16:16.894836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-02-28 00:16:16.894949 | orchestrator | 2026-02-28 00:16:16.894990 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-02-28 00:16:18.081913 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:18.082098 | orchestrator | 2026-02-28 00:16:18.082130 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-02-28 00:16:18.968085 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:18.968193 | orchestrator | 2026-02-28 00:16:18.968210 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-02-28 00:16:33.025211 | 
orchestrator | changed: [testbed-manager] 2026-02-28 00:16:33.025321 | orchestrator | 2026-02-28 00:16:33.025341 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-02-28 00:16:33.091521 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:16:33.091618 | orchestrator | 2026-02-28 00:16:33.091634 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-02-28 00:16:33.091688 | orchestrator | 2026-02-28 00:16:33.091727 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:16:34.957223 | orchestrator | ok: [testbed-manager] 2026-02-28 00:16:34.957326 | orchestrator | 2026-02-28 00:16:34.957342 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-02-28 00:16:35.072392 | orchestrator | included: osism.services.manager for testbed-manager 2026-02-28 00:16:35.072494 | orchestrator | 2026-02-28 00:16:35.072511 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-28 00:16:35.129104 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-28 00:16:35.129175 | orchestrator | 2026-02-28 00:16:35.129182 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-28 00:16:37.704412 | orchestrator | ok: [testbed-manager] 2026-02-28 00:16:37.704511 | orchestrator | 2026-02-28 00:16:37.704530 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-02-28 00:16:37.760088 | orchestrator | ok: [testbed-manager] 2026-02-28 00:16:37.760194 | orchestrator | 2026-02-28 00:16:37.760212 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-28 00:16:37.895195 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-28 00:16:37.895280 | orchestrator | 2026-02-28 00:16:37.895293 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-28 00:16:40.746237 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-02-28 00:16:40.746335 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-02-28 00:16:40.746348 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-28 00:16:40.746360 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-02-28 00:16:40.746371 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-28 00:16:40.746382 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-28 00:16:40.746393 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-28 00:16:40.746404 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-02-28 00:16:40.746415 | orchestrator | 2026-02-28 00:16:40.746427 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-28 00:16:41.387033 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:41.387138 | orchestrator | 2026-02-28 00:16:41.387155 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-28 00:16:42.016144 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:42.016241 | orchestrator | 2026-02-28 00:16:42.016258 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-02-28 00:16:42.093557 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-28 00:16:42.093669 | orchestrator | 2026-02-28 00:16:42.093700 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-02-28 00:16:43.337028 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-02-28 00:16:43.337118 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-02-28 00:16:43.337129 | orchestrator | 2026-02-28 00:16:43.337141 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-28 00:16:44.023785 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:44.023890 | orchestrator | 2026-02-28 00:16:44.023904 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-28 00:16:44.081239 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:16:44.081334 | orchestrator | 2026-02-28 00:16:44.081351 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-28 00:16:44.165623 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-28 00:16:44.165722 | orchestrator | 2026-02-28 00:16:44.165738 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-28 00:16:44.856700 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:44.856920 | orchestrator | 2026-02-28 00:16:44.856948 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-28 00:16:44.922600 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-28 00:16:44.922687 | orchestrator | 2026-02-28 00:16:44.922699 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-28 00:16:46.326297 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-28 00:16:46.326391 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-02-28 00:16:46.326404 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:46.326416 | orchestrator | 2026-02-28 00:16:46.326427 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-28 00:16:46.980463 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:46.980596 | orchestrator | 2026-02-28 00:16:46.980614 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-28 00:16:47.043826 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:16:47.043960 | orchestrator | 2026-02-28 00:16:47.043976 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-28 00:16:47.157196 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-28 00:16:47.157307 | orchestrator | 2026-02-28 00:16:47.157327 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-28 00:16:47.694466 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:47.694569 | orchestrator | 2026-02-28 00:16:47.694585 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-28 00:16:48.139053 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:48.139166 | orchestrator | 2026-02-28 00:16:48.139188 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-28 00:16:49.408752 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-02-28 00:16:49.408829 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-02-28 00:16:49.408877 | orchestrator | 2026-02-28 00:16:49.408893 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-28 00:16:50.053305 | orchestrator | changed: [testbed-manager] 2026-02-28 
00:16:50.053411 | orchestrator | 2026-02-28 00:16:50.053429 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-28 00:16:50.428769 | orchestrator | ok: [testbed-manager] 2026-02-28 00:16:50.428922 | orchestrator | 2026-02-28 00:16:50.428940 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-28 00:16:50.804361 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:50.804438 | orchestrator | 2026-02-28 00:16:50.804450 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-28 00:16:50.858254 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:16:50.858346 | orchestrator | 2026-02-28 00:16:50.858360 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-28 00:16:50.934292 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-28 00:16:50.934411 | orchestrator | 2026-02-28 00:16:50.934430 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-28 00:16:50.983420 | orchestrator | ok: [testbed-manager] 2026-02-28 00:16:50.983535 | orchestrator | 2026-02-28 00:16:50.983550 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-28 00:16:53.013405 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-02-28 00:16:53.013521 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-02-28 00:16:53.013535 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-02-28 00:16:53.013545 | orchestrator | 2026-02-28 00:16:53.013556 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-28 00:16:53.714291 | orchestrator | changed: [testbed-manager] 2026-02-28 
00:16:53.714402 | orchestrator | 2026-02-28 00:16:53.714421 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-28 00:16:54.449109 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:54.449249 | orchestrator | 2026-02-28 00:16:54.449267 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-28 00:16:55.170094 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:55.170184 | orchestrator | 2026-02-28 00:16:55.170197 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-28 00:16:55.239280 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-28 00:16:55.239378 | orchestrator | 2026-02-28 00:16:55.239392 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-28 00:16:55.280900 | orchestrator | ok: [testbed-manager] 2026-02-28 00:16:55.280996 | orchestrator | 2026-02-28 00:16:55.281008 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-28 00:16:55.981473 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-02-28 00:16:55.981581 | orchestrator | 2026-02-28 00:16:55.981596 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-28 00:16:56.058712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-28 00:16:56.058808 | orchestrator | 2026-02-28 00:16:56.058820 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-02-28 00:16:56.770296 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:56.770406 | orchestrator | 2026-02-28 00:16:56.770423 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-02-28 00:16:57.387747 | orchestrator | ok: [testbed-manager] 2026-02-28 00:16:57.387843 | orchestrator | 2026-02-28 00:16:57.387858 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-28 00:16:57.450126 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:16:57.450250 | orchestrator | 2026-02-28 00:16:57.450276 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-28 00:16:57.504308 | orchestrator | ok: [testbed-manager] 2026-02-28 00:16:57.504411 | orchestrator | 2026-02-28 00:16:57.504427 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-28 00:16:58.329665 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:58.329766 | orchestrator | 2026-02-28 00:16:58.329783 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-28 00:18:18.210541 | orchestrator | changed: [testbed-manager] 2026-02-28 00:18:18.210659 | orchestrator | 2026-02-28 00:18:18.210676 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-28 00:18:19.307604 | orchestrator | ok: [testbed-manager] 2026-02-28 00:18:19.307708 | orchestrator | 2026-02-28 00:18:19.307725 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-28 00:18:19.371019 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:18:19.371162 | orchestrator | 2026-02-28 00:18:19.371203 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-28 00:18:53.616047 | orchestrator | changed: [testbed-manager] 2026-02-28 00:18:53.616243 | orchestrator | 2026-02-28 00:18:53.616278 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
2026-02-28 00:18:53.729351 | orchestrator | ok: [testbed-manager] 2026-02-28 00:18:53.729463 | orchestrator | 2026-02-28 00:18:53.729482 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-28 00:18:53.729496 | orchestrator | 2026-02-28 00:18:53.729507 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-28 00:18:53.785680 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:18:53.785773 | orchestrator | 2026-02-28 00:18:53.785788 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-28 00:19:53.848980 | orchestrator | Pausing for 60 seconds 2026-02-28 00:19:53.849090 | orchestrator | changed: [testbed-manager] 2026-02-28 00:19:53.849106 | orchestrator | 2026-02-28 00:19:53.849121 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-02-28 00:19:57.025973 | orchestrator | changed: [testbed-manager] 2026-02-28 00:19:57.026174 | orchestrator | 2026-02-28 00:19:57.026196 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-02-28 00:20:59.246583 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-02-28 00:20:59.246725 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-02-28 00:20:59.246752 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
2026-02-28 00:20:59.246774 | orchestrator | changed: [testbed-manager] 2026-02-28 00:20:59.246796 | orchestrator | 2026-02-28 00:20:59.246816 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-28 00:21:10.283742 | orchestrator | changed: [testbed-manager] 2026-02-28 00:21:10.283845 | orchestrator | 2026-02-28 00:21:10.283862 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-28 00:21:10.363925 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-28 00:21:10.364032 | orchestrator | 2026-02-28 00:21:10.364048 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-28 00:21:10.364061 | orchestrator | 2026-02-28 00:21:10.364073 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-28 00:21:10.418364 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:21:10.418464 | orchestrator | 2026-02-28 00:21:10.418539 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-28 00:21:10.499883 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-28 00:21:10.499980 | orchestrator | 2026-02-28 00:21:10.499996 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-02-28 00:21:11.302742 | orchestrator | changed: [testbed-manager] 2026-02-28 00:21:11.302845 | orchestrator | 2026-02-28 00:21:11.302863 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-28 00:21:14.713364 | orchestrator | ok: [testbed-manager] 2026-02-28 00:21:14.713457 | orchestrator | 2026-02-28 00:21:14.713474 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-02-28 00:21:14.788389 | orchestrator | ok: [testbed-manager] => { 2026-02-28 00:21:14.788481 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-28 00:21:14.788544 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-28 00:21:14.788557 | orchestrator | "Checking running containers against expected versions...", 2026-02-28 00:21:14.788569 | orchestrator | "", 2026-02-28 00:21:14.788581 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-28 00:21:14.788593 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-02-28 00:21:14.788605 | orchestrator | " Enabled: true", 2026-02-28 00:21:14.788616 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-02-28 00:21:14.788627 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:14.788639 | orchestrator | "", 2026-02-28 00:21:14.788650 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-28 00:21:14.788662 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-02-28 00:21:14.788673 | orchestrator | " Enabled: true", 2026-02-28 00:21:14.788684 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-02-28 00:21:14.788695 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:14.788707 | orchestrator | "", 2026-02-28 00:21:14.788718 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-02-28 00:21:14.788729 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-02-28 00:21:14.788740 | orchestrator | " Enabled: true", 2026-02-28 00:21:14.788751 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-02-28 00:21:14.788762 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:14.788774 | orchestrator | "", 2026-02-28 00:21:14.788785 | 
orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-28 00:21:14.788797 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-02-28 00:21:14.788808 | orchestrator | " Enabled: true", 2026-02-28 00:21:14.788847 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-02-28 00:21:14.788859 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:14.788870 | orchestrator | "", 2026-02-28 00:21:14.788882 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-28 00:21:14.788900 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2025.1", 2026-02-28 00:21:14.788920 | orchestrator | " Enabled: true", 2026-02-28 00:21:14.788939 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2025.1", 2026-02-28 00:21:14.788956 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:14.788974 | orchestrator | "", 2026-02-28 00:21:14.788994 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-28 00:21:14.789012 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:14.789033 | orchestrator | " Enabled: true", 2026-02-28 00:21:14.789048 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:14.789062 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:14.789075 | orchestrator | "", 2026-02-28 00:21:14.789088 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-02-28 00:21:14.789101 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-28 00:21:14.789114 | orchestrator | " Enabled: true", 2026-02-28 00:21:14.789126 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-28 00:21:14.789139 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:14.789151 | orchestrator | "", 2026-02-28 00:21:14.789163 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-02-28 00:21:14.789186 | orchestrator | " 
Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-28 00:21:14.789199 | orchestrator | " Enabled: true", 2026-02-28 00:21:14.789212 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-28 00:21:14.789224 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:14.789241 | orchestrator | "", 2026-02-28 00:21:14.789254 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-28 00:21:14.789267 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-02-28 00:21:14.789279 | orchestrator | " Enabled: true", 2026-02-28 00:21:14.789290 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-02-28 00:21:14.789301 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:14.789312 | orchestrator | "", 2026-02-28 00:21:14.789323 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-28 00:21:14.789334 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-28 00:21:14.789345 | orchestrator | " Enabled: true", 2026-02-28 00:21:14.789356 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-28 00:21:14.789367 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:14.789378 | orchestrator | "", 2026-02-28 00:21:14.789389 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-28 00:21:14.789400 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:14.789411 | orchestrator | " Enabled: true", 2026-02-28 00:21:14.789422 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:14.789433 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:14.789444 | orchestrator | "", 2026-02-28 00:21:14.789454 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-28 00:21:14.789465 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:14.789476 | 
orchestrator | " Enabled: true", 2026-02-28 00:21:14.789507 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:14.789519 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:14.789530 | orchestrator | "", 2026-02-28 00:21:14.789541 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-28 00:21:14.789552 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:14.789563 | orchestrator | " Enabled: true", 2026-02-28 00:21:14.789574 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:14.789585 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:14.789607 | orchestrator | "", 2026-02-28 00:21:14.789619 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-28 00:21:14.789630 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:14.789641 | orchestrator | " Enabled: true", 2026-02-28 00:21:14.789652 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:14.789663 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:14.789674 | orchestrator | "", 2026-02-28 00:21:14.789685 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-28 00:21:14.789715 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:14.789727 | orchestrator | " Enabled: true", 2026-02-28 00:21:14.789738 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:14.789749 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:14.789760 | orchestrator | "", 2026-02-28 00:21:14.789771 | orchestrator | "=== Summary ===", 2026-02-28 00:21:14.789782 | orchestrator | "Errors (version mismatches): 0", 2026-02-28 00:21:14.789793 | orchestrator | "Warnings (expected containers not running): 0", 2026-02-28 00:21:14.789804 | orchestrator | "", 2026-02-28 00:21:14.789815 | orchestrator | "✅ All running containers match expected 
versions!" 2026-02-28 00:21:14.789827 | orchestrator | ] 2026-02-28 00:21:14.789838 | orchestrator | } 2026-02-28 00:21:14.789849 | orchestrator | 2026-02-28 00:21:14.789861 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-28 00:21:14.833736 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:21:14.833821 | orchestrator | 2026-02-28 00:21:14.833837 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:21:14.833853 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-02-28 00:21:14.833864 | orchestrator | 2026-02-28 00:21:14.938083 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-28 00:21:14.938176 | orchestrator | + deactivate 2026-02-28 00:21:14.938192 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-28 00:21:14.938205 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-28 00:21:14.938216 | orchestrator | + export PATH 2026-02-28 00:21:14.938228 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-28 00:21:14.938240 | orchestrator | + '[' -n '' ']' 2026-02-28 00:21:14.938251 | orchestrator | + hash -r 2026-02-28 00:21:14.938262 | orchestrator | + '[' -n '' ']' 2026-02-28 00:21:14.938273 | orchestrator | + unset VIRTUAL_ENV 2026-02-28 00:21:14.938284 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-28 00:21:14.938295 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-28 00:21:14.938306 | orchestrator | + unset -f deactivate 2026-02-28 00:21:14.938318 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-02-28 00:21:14.946424 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-28 00:21:14.946536 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-28 00:21:14.946563 | orchestrator | + local max_attempts=60 2026-02-28 00:21:14.946586 | orchestrator | + local name=ceph-ansible 2026-02-28 00:21:14.946605 | orchestrator | + local attempt_num=1 2026-02-28 00:21:14.947568 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:21:14.975070 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:21:14.975150 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-28 00:21:14.975164 | orchestrator | + local max_attempts=60 2026-02-28 00:21:14.975175 | orchestrator | + local name=kolla-ansible 2026-02-28 00:21:14.975185 | orchestrator | + local attempt_num=1 2026-02-28 00:21:14.975666 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-28 00:21:15.011558 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:21:15.011640 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-28 00:21:15.011656 | orchestrator | + local max_attempts=60 2026-02-28 00:21:15.011669 | orchestrator | + local name=osism-ansible 2026-02-28 00:21:15.011681 | orchestrator | + local attempt_num=1 2026-02-28 00:21:15.012821 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-28 00:21:15.055476 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:21:15.055635 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-28 00:21:15.055686 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-28 00:21:15.786909 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-02-28 00:21:15.962302 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-28 00:21:15.962414 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-02-28 00:21:15.962432 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2025.1 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-02-28 00:21:15.962444 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-28 00:21:15.962457 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-02-28 00:21:15.962470 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-02-28 00:21:15.962481 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-02-28 00:21:15.962549 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-02-28 00:21:15.962562 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-02-28 00:21:15.962574 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-02-28 00:21:15.962585 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-02-28 00:21:15.962596 | orchestrator | 
manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-02-28 00:21:15.962607 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-02-28 00:21:15.962618 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-02-28 00:21:15.962629 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-02-28 00:21:15.962640 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-02-28 00:21:15.972252 | orchestrator | ++ semver latest 7.0.0 2026-02-28 00:21:16.030830 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-28 00:21:16.030939 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-28 00:21:16.030957 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-02-28 00:21:16.034230 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-02-28 00:21:28.123044 | orchestrator | 2026-02-28 00:21:28 | INFO  | Prepare task for execution of resolvconf. 2026-02-28 00:21:28.353821 | orchestrator | 2026-02-28 00:21:28 | INFO  | Task 8f9179ef-4b97-4f70-9017-8ae5b57ecb3d (resolvconf) was prepared for execution. 2026-02-28 00:21:28.353932 | orchestrator | 2026-02-28 00:21:28 | INFO  | It takes a moment until task 8f9179ef-4b97-4f70-9017-8ae5b57ecb3d (resolvconf) has been started and output is visible here. 
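The shell trace above calls a `wait_for_container_healthy` helper that polls each manager container (`ceph-ansible`, `kolla-ansible`, `osism-ansible`) until Docker reports it healthy. A minimal sketch of that polling pattern is below — it is not the actual testbed script; `check_status` is a hypothetical stub standing in for `/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"` so the logic can be shown without a Docker daemon, and the poll interval is an assumption.

```shell
#!/bin/sh
# Sketch of the health-wait pattern traced in the log: poll a
# container's health status until it is "healthy" or the retry
# budget is exhausted.

# Hypothetical stand-in for:
#   /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"
check_status() { echo "healthy"; }

wait_for_container_healthy() {
    max_attempts="$1"
    name="$2"
    attempt_num=1
    until [ "$(check_status "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # poll interval between attempts (assumed)
    done
    return 0
}

wait_for_container_healthy 60 ceph-ansible && echo "ceph-ansible is healthy"
```

In the log all three containers are already healthy on the first probe, so each call returns immediately, matching the single `docker inspect` per container seen in the trace.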
2026-02-28 00:21:43.730612 | orchestrator | 2026-02-28 00:21:43.730725 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-02-28 00:21:43.730742 | orchestrator | 2026-02-28 00:21:43.730755 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:21:43.730766 | orchestrator | Saturday 28 February 2026 00:21:32 +0000 (0:00:00.151) 0:00:00.151 ***** 2026-02-28 00:21:43.730777 | orchestrator | ok: [testbed-manager] 2026-02-28 00:21:43.730789 | orchestrator | 2026-02-28 00:21:43.730801 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-28 00:21:43.730812 | orchestrator | Saturday 28 February 2026 00:21:37 +0000 (0:00:04.912) 0:00:05.063 ***** 2026-02-28 00:21:43.730823 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:21:43.730835 | orchestrator | 2026-02-28 00:21:43.730846 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-28 00:21:43.730857 | orchestrator | Saturday 28 February 2026 00:21:37 +0000 (0:00:00.064) 0:00:05.128 ***** 2026-02-28 00:21:43.730868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-02-28 00:21:43.730880 | orchestrator | 2026-02-28 00:21:43.730891 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-28 00:21:43.730913 | orchestrator | Saturday 28 February 2026 00:21:37 +0000 (0:00:00.089) 0:00:05.218 ***** 2026-02-28 00:21:43.730924 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-02-28 00:21:43.730936 | orchestrator | 2026-02-28 00:21:43.730947 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-02-28 00:21:43.730958 | orchestrator | Saturday 28 February 2026 00:21:37 +0000 (0:00:00.085) 0:00:05.303 ***** 2026-02-28 00:21:43.730969 | orchestrator | ok: [testbed-manager] 2026-02-28 00:21:43.730980 | orchestrator | 2026-02-28 00:21:43.730991 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-28 00:21:43.731002 | orchestrator | Saturday 28 February 2026 00:21:38 +0000 (0:00:01.134) 0:00:06.437 ***** 2026-02-28 00:21:43.731013 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:21:43.731024 | orchestrator | 2026-02-28 00:21:43.731035 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-28 00:21:43.731046 | orchestrator | Saturday 28 February 2026 00:21:39 +0000 (0:00:00.053) 0:00:06.491 ***** 2026-02-28 00:21:43.731057 | orchestrator | ok: [testbed-manager] 2026-02-28 00:21:43.731068 | orchestrator | 2026-02-28 00:21:43.731078 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-28 00:21:43.731089 | orchestrator | Saturday 28 February 2026 00:21:39 +0000 (0:00:00.512) 0:00:07.004 ***** 2026-02-28 00:21:43.731102 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:21:43.731115 | orchestrator | 2026-02-28 00:21:43.731128 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-28 00:21:43.731141 | orchestrator | Saturday 28 February 2026 00:21:39 +0000 (0:00:00.073) 0:00:07.077 ***** 2026-02-28 00:21:43.731153 | orchestrator | changed: [testbed-manager] 2026-02-28 00:21:43.731166 | orchestrator | 2026-02-28 00:21:43.731178 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-28 00:21:43.731191 | orchestrator | Saturday 28 February 2026 00:21:40 +0000 (0:00:00.563) 0:00:07.641 ***** 2026-02-28 00:21:43.731203 | orchestrator | changed: 
[testbed-manager] 2026-02-28 00:21:43.731216 | orchestrator | 2026-02-28 00:21:43.731252 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-28 00:21:43.731266 | orchestrator | Saturday 28 February 2026 00:21:41 +0000 (0:00:01.133) 0:00:08.775 ***** 2026-02-28 00:21:43.731279 | orchestrator | ok: [testbed-manager] 2026-02-28 00:21:43.731291 | orchestrator | 2026-02-28 00:21:43.731303 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-28 00:21:43.731316 | orchestrator | Saturday 28 February 2026 00:21:42 +0000 (0:00:00.989) 0:00:09.764 ***** 2026-02-28 00:21:43.731328 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-02-28 00:21:43.731341 | orchestrator | 2026-02-28 00:21:43.731352 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-28 00:21:43.731365 | orchestrator | Saturday 28 February 2026 00:21:42 +0000 (0:00:00.082) 0:00:09.846 ***** 2026-02-28 00:21:43.731378 | orchestrator | changed: [testbed-manager] 2026-02-28 00:21:43.731391 | orchestrator | 2026-02-28 00:21:43.731403 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:21:43.731416 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-28 00:21:43.731429 | orchestrator | 2026-02-28 00:21:43.731441 | orchestrator | 2026-02-28 00:21:43.731453 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:21:43.731464 | orchestrator | Saturday 28 February 2026 00:21:43 +0000 (0:00:01.119) 0:00:10.966 ***** 2026-02-28 00:21:43.731475 | orchestrator | =============================================================================== 2026-02-28 00:21:43.731486 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.91s 2026-02-28 00:21:43.731497 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.13s 2026-02-28 00:21:43.731508 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.13s 2026-02-28 00:21:43.731518 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.12s 2026-02-28 00:21:43.731529 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.99s 2026-02-28 00:21:43.731573 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.56s 2026-02-28 00:21:43.731604 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.51s 2026-02-28 00:21:43.731616 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-02-28 00:21:43.731626 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-02-28 00:21:43.731637 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-02-28 00:21:43.731648 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2026-02-28 00:21:43.731665 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-02-28 00:21:43.731676 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2026-02-28 00:21:44.063105 | orchestrator | + osism apply sshconfig 2026-02-28 00:21:56.133485 | orchestrator | 2026-02-28 00:21:56 | INFO  | Prepare task for execution of sshconfig. 2026-02-28 00:21:56.203089 | orchestrator | 2026-02-28 00:21:56 | INFO  | Task a7d8ef05-9839-4f19-8088-404113eb287d (sshconfig) was prepared for execution. 
2026-02-28 00:21:56.203185 | orchestrator | 2026-02-28 00:21:56 | INFO  | It takes a moment until task a7d8ef05-9839-4f19-8088-404113eb287d (sshconfig) has been started and output is visible here. 2026-02-28 00:22:08.343820 | orchestrator | 2026-02-28 00:22:08.343920 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-02-28 00:22:08.343936 | orchestrator | 2026-02-28 00:22:08.343949 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-02-28 00:22:08.343961 | orchestrator | Saturday 28 February 2026 00:22:00 +0000 (0:00:00.175) 0:00:00.175 ***** 2026-02-28 00:22:08.344001 | orchestrator | ok: [testbed-manager] 2026-02-28 00:22:08.344014 | orchestrator | 2026-02-28 00:22:08.344026 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-02-28 00:22:08.344037 | orchestrator | Saturday 28 February 2026 00:22:01 +0000 (0:00:00.563) 0:00:00.738 ***** 2026-02-28 00:22:08.344048 | orchestrator | changed: [testbed-manager] 2026-02-28 00:22:08.344059 | orchestrator | 2026-02-28 00:22:08.344070 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-02-28 00:22:08.344081 | orchestrator | Saturday 28 February 2026 00:22:01 +0000 (0:00:00.528) 0:00:01.267 ***** 2026-02-28 00:22:08.344092 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-02-28 00:22:08.344104 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-02-28 00:22:08.344115 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-02-28 00:22:08.344126 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-02-28 00:22:08.344137 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-02-28 00:22:08.344147 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2026-02-28 00:22:08.344158 | orchestrator | changed: 
[testbed-manager] => (item=testbed-manager) 2026-02-28 00:22:08.344169 | orchestrator | 2026-02-28 00:22:08.344180 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-02-28 00:22:08.344191 | orchestrator | Saturday 28 February 2026 00:22:07 +0000 (0:00:05.813) 0:00:07.080 ***** 2026-02-28 00:22:08.344202 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:22:08.344213 | orchestrator | 2026-02-28 00:22:08.344224 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-02-28 00:22:08.344236 | orchestrator | Saturday 28 February 2026 00:22:07 +0000 (0:00:00.082) 0:00:07.162 ***** 2026-02-28 00:22:08.344247 | orchestrator | changed: [testbed-manager] 2026-02-28 00:22:08.344258 | orchestrator | 2026-02-28 00:22:08.344269 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:22:08.344281 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:22:08.344292 | orchestrator | 2026-02-28 00:22:08.344303 | orchestrator | 2026-02-28 00:22:08.344314 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:22:08.344326 | orchestrator | Saturday 28 February 2026 00:22:08 +0000 (0:00:00.580) 0:00:07.743 ***** 2026-02-28 00:22:08.344337 | orchestrator | =============================================================================== 2026-02-28 00:22:08.344348 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.81s 2026-02-28 00:22:08.344361 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s 2026-02-28 00:22:08.344374 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.56s 2026-02-28 00:22:08.344386 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.53s 2026-02-28 00:22:08.344399 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2026-02-28 00:22:08.661214 | orchestrator | + osism apply known-hosts 2026-02-28 00:22:20.697462 | orchestrator | 2026-02-28 00:22:20 | INFO  | Prepare task for execution of known-hosts. 2026-02-28 00:22:20.779041 | orchestrator | 2026-02-28 00:22:20 | INFO  | Task 1fab8fed-ee35-4287-8d26-b36ef55c764c (known-hosts) was prepared for execution. 2026-02-28 00:22:20.779136 | orchestrator | 2026-02-28 00:22:20 | INFO  | It takes a moment until task 1fab8fed-ee35-4287-8d26-b36ef55c764c (known-hosts) has been started and output is visible here. 2026-02-28 00:22:37.082014 | orchestrator | 2026-02-28 00:22:37.082168 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-02-28 00:22:37.082184 | orchestrator | 2026-02-28 00:22:37.082194 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-02-28 00:22:37.082230 | orchestrator | Saturday 28 February 2026 00:22:25 +0000 (0:00:00.167) 0:00:00.167 ***** 2026-02-28 00:22:37.082241 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-28 00:22:37.082251 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-28 00:22:37.082261 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-28 00:22:37.082271 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-28 00:22:37.082280 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-28 00:22:37.082290 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-28 00:22:37.082310 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-28 00:22:37.082320 | orchestrator | 2026-02-28 00:22:37.082331 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-02-28 
00:22:37.082342 | orchestrator | Saturday 28 February 2026 00:22:31 +0000 (0:00:05.949) 0:00:06.117 ***** 2026-02-28 00:22:37.082352 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-28 00:22:37.082364 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-28 00:22:37.082374 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-28 00:22:37.082383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-28 00:22:37.082393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-28 00:22:37.082402 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-28 00:22:37.082412 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-28 00:22:37.082422 | orchestrator | 2026-02-28 00:22:37.082431 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:37.082441 | orchestrator | Saturday 28 February 2026 00:22:31 +0000 (0:00:00.160) 0:00:06.278 ***** 2026-02-28 00:22:37.082452 | 
orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPzDK7G2bmZ+dBM1CLLi1QFwUPa7CJPczz2ej2DUbaYRqvR9hyYqM47QfKXMkIx+uOjaGHUHrWscqVlz50kQ2yk=) 2026-02-28 00:22:37.082467 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCohN6a3PnHv9BfDZwwvybdsHztlYcKp3rhEAlN4oeB64aZYNgiFOdf6lz4d/inm8s9w6+GsnX0xIt7uHlUqU1viIZbhBUWH2hnYkvtVeVP113kwEZ17aaViYH0iKXu7HBxlkQyupW5w+qTSHfpuA/eZaI8tjFF0itTd5BcTP3scAbjxllsjBHN0B211TQ0Htl2wq43y8e9odBsD/GJnTdpFv99FGevXW7jvb1cqWRf6MLPHf0vF8VVPgdsZnwrlbeIGAqkv4IZpMmCIUmMuz/nJxZN94rakN6UwAv0grZF6DZI6lbEZ0SPVuq2cjWfvJZethmjzLdzexJztWJIyNkaGVZR0n9SH1oSu9oPjsWOSin8Y5LA9aMK4P5/DLrR4R5DiAsqoDdDY0Lr4FzJGLxZVTrIE1AP6oSSuctcmVwbTX2R9cPGqzt2Cgu7pKOdBdIoJZqhheUMXjt7d9OZHLgeFxNB1EA2Cqt2TzcWSkX9KCEaQRKlp9JE4j+0YcVXbBk=) 2026-02-28 00:22:37.082480 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPcEad0CL8N2nSFMkxqKZoMDJYZVwe37Kwe4yCbTd37w) 2026-02-28 00:22:37.082491 | orchestrator | 2026-02-28 00:22:37.082501 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:37.082511 | orchestrator | Saturday 28 February 2026 00:22:32 +0000 (0:00:01.174) 0:00:07.453 ***** 2026-02-28 00:22:37.082528 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN/wZo1+IFPA1jUkNwgjAq0X01Vzo77KpQmeU7OP+/zQ) 2026-02-28 00:22:37.082573 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCOZeMf9W4fC9o1YRZtVbayZT4rX1PtpZYwZBTZKQqh3ssnhbVzodbKvjuoc5f3MCb/mqnOW4J1zJflsvM26xFcH1GNegMeEQWTI8zirj9NS18UnA9MheOEACmiMpRf6kP9dJ6ODH9gPlgprqtBmQ0F1YYGSiuCJtSmjRMqboRJIZXuPd6ixK/dVdqt3Qt9ZWB4SzFJZ9eFfFK2JCB7KXQ40sDXMHEFntcm3FSbiQTTbOwhIv3rlz96fkhR9WnyYhgKr+F9j4DmXU7a6f+Bz+fcpBslmiHLmopd2B0cj8rSAuMa9FT3RjHP9+nPQXBvYAuerh9BAVANUONd1CZYlORgIMb7mWjKSDprUkX3qMqBLSRGmJ4CiJHBBXZWhl/7oLliwN9xoLKzwB+8OK0NU1yGT2tH+FrBEde8U4hkp219Fi9kcVdezxaPfNVSfcFl4H0sQPIxTfyyEU5GE0iReR4owdTO2cuoIbEoBTjxleYxXoYKbrZf+vIi7W3TwVtoPec=) 2026-02-28 00:22:37.082586 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAyg6UeywZoHFXw8Xi7VL37qeSGEruFZWvoqDfGkWi3TmzhXVvRiAnB92lv17sGBvT3G4z3MAW2lvrxPp9RpoiE=) 2026-02-28 00:22:37.082598 | orchestrator | 2026-02-28 00:22:37.082609 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:37.082645 | orchestrator | Saturday 28 February 2026 00:22:33 +0000 (0:00:01.095) 0:00:08.548 ***** 2026-02-28 00:22:37.082658 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKpSTsqCsCrbgdNXuvcer5X/fokXgzsK0lasWfFu8tKUv0Z1v9Xa9IzYb4+xV/SHooZKi/h/c2FlF2W8ahnOL6s=) 2026-02-28 00:22:37.082669 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP8CXFuTqhNM87fFr3ox0x/Gr1QDLNsq9CGA8zW+wkRV) 2026-02-28 00:22:37.082747 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDQrLAfpv9ILhMd5VxZoR+if7Xii/dyt4hWAk9PhIpc4mAUb+r+oU23Hg3lKEp9G1LK6di3SwhNzZDQzwQ+WpaOVPvU63TBSYidmLQ0oz387Q48pqJsSBn1IR3g0iAgyua6C/kcsrrauQR2T3hZPTcj0Wb1LJlwWx9GVjcVnFnYdb31s58IEHG81q8ZKmddF7g6OdCH93ZAFk7hlu2rQYXv9U7e6AkPd6jyWUPVnAdLxrXMgG0//K5XyL47UhhsMOTl1gQEjImA7QBwO8h+a1uzlHRlogKPxD8DC9jA1+WKY7u/T7ekP6N7hKQ0yJiuTOMCxJe8oWqdMDcDgzT1VhQDEp2JXSJzmN+FL3IDSutatuqlvW/lsQYlI7zAnLUxA5DHWtUD7LcKmCJ2es0rUMoiUUMVc3LichNE7wbPzzUrvjfIvPPPLRrES7CKn7bUHmQebbq75VJ/Em5w2q0eNSSCmHZYEdzUPgiPRjeLF/onJU4nbhvjTg/lFkms2NolAp0=) 2026-02-28 00:22:37.082761 | orchestrator | 2026-02-28 00:22:37.082772 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:37.082789 | orchestrator | Saturday 28 February 2026 00:22:34 +0000 (0:00:01.114) 0:00:09.663 ***** 2026-02-28 00:22:37.082801 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC09h6SLZW+CZ5qcamoJ1qXLCewaZIMx2TVXnRMwCbbVOGpEGaL2XfLSFy6XyI7HwenYPUqZSTfW+XGOJDTFy2THrOZ/qT9qReyWndoe8ZB65cRyF+BpvRCQmlPkbc/0B1hPkfDNRuCTPUIGm2QP7CPPI3SpNrjR9BdCq3vY0fVk2TNHkjOYpE4YIs3R1m0Re39pS65wLCN3+lqWlkD7durK1N/7lQRgZG7Lb6RynbTAyB4xJBXTIdcrEdSklfIUFMNpJsNKkFcsbySQf8++tra/9CArUkPu2JHTwzOaOO/nn8Hm1OF1w8B4fvZ96CxCKaG66YG8Bl4yBlzaK3rhAvfMuXALK0VlJeyb3TLxdbzXu0w4pETT5oLGA/m9nNOPpmOOMi9g6sw/9UQ6gh4qU9eJOgDA6UniNoMCBnzKdMC9w8oictACNZ7TBX6lRqWn/fBaLUFz0xai7VFRK/mKPFm72S+7XB79SrJeKVQr5CRRoTaZ+SLOR+CwLSuUw3JXtc=) 2026-02-28 00:22:37.082813 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGLJ90AyCe1LDA9I443BQ0XbHeGZILo7htBvWEfrguY6kD3bOjXex5Ba+JhfnBrqVe7G/XMV9dhKz5KzuQCudqc=) 2026-02-28 00:22:37.082824 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ9pZ5xiwJKTEAS5ZZQDKZteeMzGH2yb41u07oXSxC+/) 2026-02-28 00:22:37.082835 | orchestrator | 2026-02-28 00:22:37.082846 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:37.082859 | orchestrator | Saturday 28 February 2026 00:22:35 +0000 (0:00:01.070) 0:00:10.733 ***** 2026-02-28 00:22:37.082870 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPTUwvAaiZrb0dYwj89GnuzJghOJGzPXpCvWC4vqFpwVJsH7x5hnFZJZVAoWAwPejG7C0P0K7/RT1sXDZqqnDXY=) 2026-02-28 00:22:37.082890 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQQidngRxlBqDGzLmpCg5+2HgqBZNU6tJw00+Zb0bLQtDgAu4CUP8Z4Q/rrfnZK+26l2GdAGFf3J5gPI8Et6jwiWhahZiWrnXQmEiqt2jl5nyD/xPJSRgvoodp767rL7JO/sAv0bNXpVKPjNVshvzWY79s1Fle3nmWlzcx6xGfspF92A6m5hcE0Bxc8QLdTQOF3xYo/ofMpzK0Po5zOhngSQ2bf9jHx1/HE8KGDhxcCg4N4sAHgD+dCKZa3rvm7VACAqOLbYC96XdPEvVs9aDbSqK7y8NBGHtWPnEIEfl8VPVNi33b7+pyxg4BTWtWx+j21bBKW0tB5wprFT0L8Jt3Sr1qmZjcc02kmZuli83bjKU58m+J4UWJwPd6Ur4kvdVr8ne0jZ1nMgQHeoW8Qyhii9vWjZIMl42nvQsVjt0HXJ9c/RdYoDnt7PiiZF8r6w7C4eYGHG7ZUc+Yf1sZ65LqmE1HCRiakPOE7NoBqCTzoOj+wbCTMw+B1pscRNIz/C0=) 2026-02-28 00:22:37.082902 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPD74vpUIWJS2gGy5RKzMVbQiJX5yYUTEDpVubNEtDFQ) 2026-02-28 00:22:37.082912 | orchestrator | 2026-02-28 00:22:37.082922 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:37.082931 | orchestrator | Saturday 28 February 2026 00:22:36 +0000 (0:00:01.061) 0:00:11.795 ***** 2026-02-28 00:22:37.082947 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKRVdv5LndjoDeES45mnt5vBsIE9gRYC0TgyJbFp8jKg6UdtD2T86kHhSacWKB1vrzR45xMDjsTBvVZAa1Qlo8s=) 2026-02-28 00:22:48.294265 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDlUo9Ce+c8lItCHzJvbZxwpIalt7gkjXtU14xCogMRbhFKl5tklghmYbTYIewfaYmIsrgiTOmEaFlz/sqPf06lro8Kjtl/T4AWEl5aacZCHK0ck2KjCLKOInZB1xX1K25nP1XYT3F8/G86MEa+lLW1QysF0cWJzuwqz+9nFsggGcA0SkmQd8zJPPpWbeLHAFN9ONf+6gtFRBvgoM9FfreKOwtqVLk2APYlqJ0eOW87qvAuxHQT487ZJ/ZjkgCql8mcH29flm5fFTVAdKQkf+rBlUQQH6B7Uhg1r/i46iTNdj5yLH/UT5hS2LaoLcvW1zdnP7b+GQgprvx7iW3EgDUhUDEbRaY4Q5VvOa7cK1FMzDV4vq78CViHa1NGrQju8FSNudNvTK2ZGLU7hGeJ5tGYH8yfolHKWIVCIla5BqeHhi0JlFSk+vWNQLLmcICdtquzku70bFBXTk9LcC1r4wWO5xf6rPtUNvIfrD5MnK3EKm/Sj42bsqmCgi4b0Jh0dRc=) 2026-02-28 00:22:48.294364 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBzXLoPEaNorq+a2g7pNKSZK0zfVDO2FZK5INasuSlzC) 2026-02-28 00:22:48.294380 | orchestrator | 2026-02-28 00:22:48.294392 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:48.294402 | orchestrator | Saturday 28 February 2026 00:22:37 +0000 (0:00:01.079) 0:00:12.875 ***** 2026-02-28 00:22:48.294411 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICTuMnFSJfQ2Yr5A0fy1VDKlM4uIy52D45JgbUeuEati) 2026-02-28 00:22:48.294420 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCx/gzWJ2Ncvkhmtnieqeml0JEaNOBULnZRaWBUJo7CQQqdnl3bTo2viPOrXG2ssnkCGl0OTFxJpZir19jEAg8sQG89JmKpIm6UDu0WtfL5F51ZU18rdBt0jmi12RXodDwlu1AtdkuDvbTxzp4zDJXH4bLw7fMMG1NNMuOhNZ4sIaYq4a2oP/JG74v7fOWE9cpepuIKrS0mWWt3LPjcF0iESdA7jE2vQCNcPPatvJS+YAXtxop76dL0q8w5W8GjMnD7HEahdGjTCTJgnJmqzwUvgaY5zV/b8NCqIoK5BuqxdyjkaV+Pe6vGPMqH+aAj2BQLbZbRjEAJRcq+YnLm4I3r7ldTLpLBdstqWqlfYUtXXdNBbfrOixSUlEz0GUTIpcuGpEKO+jcHG+81EtR6DfiHrTGtJydFSe3dSJcXIcRed7VVEQS4WLrX/tJhEhFbay2mb2tVPTeMe+7cdq2+GrdFESuYncvXvCVK0TPrxqZbrX46ppcfUhCXUpliMyr8UJU=) 2026-02-28 00:22:48.294431 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB0EojAE9+r1jhw1fQWd6kJUfqQgPTbhdagqHq5voVeCFp1Z0NPPohGkYdldp6XJq+jgVnx0zzIxAn0SDEEG5uU=) 2026-02-28 00:22:48.294441 | orchestrator | 2026-02-28 00:22:48.294450 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-02-28 00:22:48.294460 | orchestrator | Saturday 28 February 2026 00:22:38 +0000 (0:00:01.038) 0:00:13.914 ***** 2026-02-28 00:22:48.294470 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-28 00:22:48.294479 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-28 00:22:48.294487 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-28 00:22:48.294514 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-28 00:22:48.294522 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-28 00:22:48.294531 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-28 00:22:48.294548 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-28 00:22:48.294557 | orchestrator | 2026-02-28 00:22:48.294566 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-02-28 00:22:48.294576 | orchestrator | Saturday 28 February 2026 00:22:44 +0000 (0:00:05.397) 0:00:19.311 ***** 2026-02-28 00:22:48.294586 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-28 00:22:48.294596 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-28 00:22:48.294605 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for 
testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-28 00:22:48.294614 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-28 00:22:48.294625 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-28 00:22:48.294687 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-28 00:22:48.294703 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-28 00:22:48.294716 | orchestrator | 2026-02-28 00:22:48.294750 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:48.294766 | orchestrator | Saturday 28 February 2026 00:22:44 +0000 (0:00:00.185) 0:00:19.497 ***** 2026-02-28 00:22:48.294784 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCohN6a3PnHv9BfDZwwvybdsHztlYcKp3rhEAlN4oeB64aZYNgiFOdf6lz4d/inm8s9w6+GsnX0xIt7uHlUqU1viIZbhBUWH2hnYkvtVeVP113kwEZ17aaViYH0iKXu7HBxlkQyupW5w+qTSHfpuA/eZaI8tjFF0itTd5BcTP3scAbjxllsjBHN0B211TQ0Htl2wq43y8e9odBsD/GJnTdpFv99FGevXW7jvb1cqWRf6MLPHf0vF8VVPgdsZnwrlbeIGAqkv4IZpMmCIUmMuz/nJxZN94rakN6UwAv0grZF6DZI6lbEZ0SPVuq2cjWfvJZethmjzLdzexJztWJIyNkaGVZR0n9SH1oSu9oPjsWOSin8Y5LA9aMK4P5/DLrR4R5DiAsqoDdDY0Lr4FzJGLxZVTrIE1AP6oSSuctcmVwbTX2R9cPGqzt2Cgu7pKOdBdIoJZqhheUMXjt7d9OZHLgeFxNB1EA2Cqt2TzcWSkX9KCEaQRKlp9JE4j+0YcVXbBk=) 2026-02-28 00:22:48.294800 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPzDK7G2bmZ+dBM1CLLi1QFwUPa7CJPczz2ej2DUbaYRqvR9hyYqM47QfKXMkIx+uOjaGHUHrWscqVlz50kQ2yk=) 2026-02-28 00:22:48.294813 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPcEad0CL8N2nSFMkxqKZoMDJYZVwe37Kwe4yCbTd37w) 2026-02-28 00:22:48.294822 | orchestrator | 2026-02-28 00:22:48.294831 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:48.294840 | orchestrator | Saturday 28 February 2026 00:22:45 +0000 (0:00:01.088) 0:00:20.585 ***** 2026-02-28 00:22:48.294849 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCOZeMf9W4fC9o1YRZtVbayZT4rX1PtpZYwZBTZKQqh3ssnhbVzodbKvjuoc5f3MCb/mqnOW4J1zJflsvM26xFcH1GNegMeEQWTI8zirj9NS18UnA9MheOEACmiMpRf6kP9dJ6ODH9gPlgprqtBmQ0F1YYGSiuCJtSmjRMqboRJIZXuPd6ixK/dVdqt3Qt9ZWB4SzFJZ9eFfFK2JCB7KXQ40sDXMHEFntcm3FSbiQTTbOwhIv3rlz96fkhR9WnyYhgKr+F9j4DmXU7a6f+Bz+fcpBslmiHLmopd2B0cj8rSAuMa9FT3RjHP9+nPQXBvYAuerh9BAVANUONd1CZYlORgIMb7mWjKSDprUkX3qMqBLSRGmJ4CiJHBBXZWhl/7oLliwN9xoLKzwB+8OK0NU1yGT2tH+FrBEde8U4hkp219Fi9kcVdezxaPfNVSfcFl4H0sQPIxTfyyEU5GE0iReR4owdTO2cuoIbEoBTjxleYxXoYKbrZf+vIi7W3TwVtoPec=) 2026-02-28 00:22:48.294866 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAyg6UeywZoHFXw8Xi7VL37qeSGEruFZWvoqDfGkWi3TmzhXVvRiAnB92lv17sGBvT3G4z3MAW2lvrxPp9RpoiE=) 2026-02-28 00:22:48.294876 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN/wZo1+IFPA1jUkNwgjAq0X01Vzo77KpQmeU7OP+/zQ) 2026-02-28 00:22:48.294885 | orchestrator | 2026-02-28 00:22:48.294894 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:48.294903 | orchestrator | Saturday 28 February 2026 00:22:46 +0000 (0:00:01.052) 0:00:21.638 ***** 
2026-02-28 00:22:48.294912 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKpSTsqCsCrbgdNXuvcer5X/fokXgzsK0lasWfFu8tKUv0Z1v9Xa9IzYb4+xV/SHooZKi/h/c2FlF2W8ahnOL6s=) 2026-02-28 00:22:48.294921 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQrLAfpv9ILhMd5VxZoR+if7Xii/dyt4hWAk9PhIpc4mAUb+r+oU23Hg3lKEp9G1LK6di3SwhNzZDQzwQ+WpaOVPvU63TBSYidmLQ0oz387Q48pqJsSBn1IR3g0iAgyua6C/kcsrrauQR2T3hZPTcj0Wb1LJlwWx9GVjcVnFnYdb31s58IEHG81q8ZKmddF7g6OdCH93ZAFk7hlu2rQYXv9U7e6AkPd6jyWUPVnAdLxrXMgG0//K5XyL47UhhsMOTl1gQEjImA7QBwO8h+a1uzlHRlogKPxD8DC9jA1+WKY7u/T7ekP6N7hKQ0yJiuTOMCxJe8oWqdMDcDgzT1VhQDEp2JXSJzmN+FL3IDSutatuqlvW/lsQYlI7zAnLUxA5DHWtUD7LcKmCJ2es0rUMoiUUMVc3LichNE7wbPzzUrvjfIvPPPLRrES7CKn7bUHmQebbq75VJ/Em5w2q0eNSSCmHZYEdzUPgiPRjeLF/onJU4nbhvjTg/lFkms2NolAp0=) 2026-02-28 00:22:48.294930 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP8CXFuTqhNM87fFr3ox0x/Gr1QDLNsq9CGA8zW+wkRV) 2026-02-28 00:22:48.294939 | orchestrator | 2026-02-28 00:22:48.294948 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:48.294956 | orchestrator | Saturday 28 February 2026 00:22:47 +0000 (0:00:01.031) 0:00:22.669 ***** 2026-02-28 00:22:48.294965 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ9pZ5xiwJKTEAS5ZZQDKZteeMzGH2yb41u07oXSxC+/) 2026-02-28 00:22:48.294993 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC09h6SLZW+CZ5qcamoJ1qXLCewaZIMx2TVXnRMwCbbVOGpEGaL2XfLSFy6XyI7HwenYPUqZSTfW+XGOJDTFy2THrOZ/qT9qReyWndoe8ZB65cRyF+BpvRCQmlPkbc/0B1hPkfDNRuCTPUIGm2QP7CPPI3SpNrjR9BdCq3vY0fVk2TNHkjOYpE4YIs3R1m0Re39pS65wLCN3+lqWlkD7durK1N/7lQRgZG7Lb6RynbTAyB4xJBXTIdcrEdSklfIUFMNpJsNKkFcsbySQf8++tra/9CArUkPu2JHTwzOaOO/nn8Hm1OF1w8B4fvZ96CxCKaG66YG8Bl4yBlzaK3rhAvfMuXALK0VlJeyb3TLxdbzXu0w4pETT5oLGA/m9nNOPpmOOMi9g6sw/9UQ6gh4qU9eJOgDA6UniNoMCBnzKdMC9w8oictACNZ7TBX6lRqWn/fBaLUFz0xai7VFRK/mKPFm72S+7XB79SrJeKVQr5CRRoTaZ+SLOR+CwLSuUw3JXtc=) 2026-02-28 00:22:53.006709 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGLJ90AyCe1LDA9I443BQ0XbHeGZILo7htBvWEfrguY6kD3bOjXex5Ba+JhfnBrqVe7G/XMV9dhKz5KzuQCudqc=) 2026-02-28 00:22:53.006820 | orchestrator | 2026-02-28 00:22:53.006839 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:53.006852 | orchestrator | Saturday 28 February 2026 00:22:48 +0000 (0:00:01.094) 0:00:23.763 ***** 2026-02-28 00:22:53.006866 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQQidngRxlBqDGzLmpCg5+2HgqBZNU6tJw00+Zb0bLQtDgAu4CUP8Z4Q/rrfnZK+26l2GdAGFf3J5gPI8Et6jwiWhahZiWrnXQmEiqt2jl5nyD/xPJSRgvoodp767rL7JO/sAv0bNXpVKPjNVshvzWY79s1Fle3nmWlzcx6xGfspF92A6m5hcE0Bxc8QLdTQOF3xYo/ofMpzK0Po5zOhngSQ2bf9jHx1/HE8KGDhxcCg4N4sAHgD+dCKZa3rvm7VACAqOLbYC96XdPEvVs9aDbSqK7y8NBGHtWPnEIEfl8VPVNi33b7+pyxg4BTWtWx+j21bBKW0tB5wprFT0L8Jt3Sr1qmZjcc02kmZuli83bjKU58m+J4UWJwPd6Ur4kvdVr8ne0jZ1nMgQHeoW8Qyhii9vWjZIMl42nvQsVjt0HXJ9c/RdYoDnt7PiiZF8r6w7C4eYGHG7ZUc+Yf1sZ65LqmE1HCRiakPOE7NoBqCTzoOj+wbCTMw+B1pscRNIz/C0=) 2026-02-28 00:22:53.006902 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPTUwvAaiZrb0dYwj89GnuzJghOJGzPXpCvWC4vqFpwVJsH7x5hnFZJZVAoWAwPejG7C0P0K7/RT1sXDZqqnDXY=) 
2026-02-28 00:22:53.006915 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPD74vpUIWJS2gGy5RKzMVbQiJX5yYUTEDpVubNEtDFQ) 2026-02-28 00:22:53.006927 | orchestrator | 2026-02-28 00:22:53.006955 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:53.006967 | orchestrator | Saturday 28 February 2026 00:22:49 +0000 (0:00:01.023) 0:00:24.787 ***** 2026-02-28 00:22:53.006978 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKRVdv5LndjoDeES45mnt5vBsIE9gRYC0TgyJbFp8jKg6UdtD2T86kHhSacWKB1vrzR45xMDjsTBvVZAa1Qlo8s=) 2026-02-28 00:22:53.006991 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDlUo9Ce+c8lItCHzJvbZxwpIalt7gkjXtU14xCogMRbhFKl5tklghmYbTYIewfaYmIsrgiTOmEaFlz/sqPf06lro8Kjtl/T4AWEl5aacZCHK0ck2KjCLKOInZB1xX1K25nP1XYT3F8/G86MEa+lLW1QysF0cWJzuwqz+9nFsggGcA0SkmQd8zJPPpWbeLHAFN9ONf+6gtFRBvgoM9FfreKOwtqVLk2APYlqJ0eOW87qvAuxHQT487ZJ/ZjkgCql8mcH29flm5fFTVAdKQkf+rBlUQQH6B7Uhg1r/i46iTNdj5yLH/UT5hS2LaoLcvW1zdnP7b+GQgprvx7iW3EgDUhUDEbRaY4Q5VvOa7cK1FMzDV4vq78CViHa1NGrQju8FSNudNvTK2ZGLU7hGeJ5tGYH8yfolHKWIVCIla5BqeHhi0JlFSk+vWNQLLmcICdtquzku70bFBXTk9LcC1r4wWO5xf6rPtUNvIfrD5MnK3EKm/Sj42bsqmCgi4b0Jh0dRc=) 2026-02-28 00:22:53.007003 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBzXLoPEaNorq+a2g7pNKSZK0zfVDO2FZK5INasuSlzC) 2026-02-28 00:22:53.007014 | orchestrator | 2026-02-28 00:22:53.007025 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:53.007036 | orchestrator | Saturday 28 February 2026 00:22:50 +0000 (0:00:01.040) 0:00:25.828 ***** 2026-02-28 00:22:53.007047 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAICTuMnFSJfQ2Yr5A0fy1VDKlM4uIy52D45JgbUeuEati) 2026-02-28 00:22:53.007059 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCx/gzWJ2Ncvkhmtnieqeml0JEaNOBULnZRaWBUJo7CQQqdnl3bTo2viPOrXG2ssnkCGl0OTFxJpZir19jEAg8sQG89JmKpIm6UDu0WtfL5F51ZU18rdBt0jmi12RXodDwlu1AtdkuDvbTxzp4zDJXH4bLw7fMMG1NNMuOhNZ4sIaYq4a2oP/JG74v7fOWE9cpepuIKrS0mWWt3LPjcF0iESdA7jE2vQCNcPPatvJS+YAXtxop76dL0q8w5W8GjMnD7HEahdGjTCTJgnJmqzwUvgaY5zV/b8NCqIoK5BuqxdyjkaV+Pe6vGPMqH+aAj2BQLbZbRjEAJRcq+YnLm4I3r7ldTLpLBdstqWqlfYUtXXdNBbfrOixSUlEz0GUTIpcuGpEKO+jcHG+81EtR6DfiHrTGtJydFSe3dSJcXIcRed7VVEQS4WLrX/tJhEhFbay2mb2tVPTeMe+7cdq2+GrdFESuYncvXvCVK0TPrxqZbrX46ppcfUhCXUpliMyr8UJU=) 2026-02-28 00:22:53.007070 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB0EojAE9+r1jhw1fQWd6kJUfqQgPTbhdagqHq5voVeCFp1Z0NPPohGkYdldp6XJq+jgVnx0zzIxAn0SDEEG5uU=) 2026-02-28 00:22:53.007082 | orchestrator | 2026-02-28 00:22:53.007093 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-02-28 00:22:53.007104 | orchestrator | Saturday 28 February 2026 00:22:51 +0000 (0:00:01.040) 0:00:26.868 ***** 2026-02-28 00:22:53.007115 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-28 00:22:53.007127 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-28 00:22:53.007138 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-28 00:22:53.007149 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-28 00:22:53.007177 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-28 00:22:53.007189 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-28 00:22:53.007200 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-28 00:22:53.007226 | orchestrator | skipping: 
[testbed-manager] 2026-02-28 00:22:53.007240 | orchestrator | 2026-02-28 00:22:53.007253 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-02-28 00:22:53.007266 | orchestrator | Saturday 28 February 2026 00:22:51 +0000 (0:00:00.161) 0:00:27.030 ***** 2026-02-28 00:22:53.007279 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:22:53.007292 | orchestrator | 2026-02-28 00:22:53.007304 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-02-28 00:22:53.007317 | orchestrator | Saturday 28 February 2026 00:22:51 +0000 (0:00:00.063) 0:00:27.093 ***** 2026-02-28 00:22:53.007330 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:22:53.007343 | orchestrator | 2026-02-28 00:22:53.007356 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-02-28 00:22:53.007368 | orchestrator | Saturday 28 February 2026 00:22:52 +0000 (0:00:00.053) 0:00:27.147 ***** 2026-02-28 00:22:53.007380 | orchestrator | changed: [testbed-manager] 2026-02-28 00:22:53.007393 | orchestrator | 2026-02-28 00:22:53.007407 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:22:53.007420 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-28 00:22:53.007434 | orchestrator | 2026-02-28 00:22:53.007446 | orchestrator | 2026-02-28 00:22:53.007459 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:22:53.007472 | orchestrator | Saturday 28 February 2026 00:22:52 +0000 (0:00:00.745) 0:00:27.892 ***** 2026-02-28 00:22:53.007484 | orchestrator | =============================================================================== 2026-02-28 00:22:53.007497 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.95s 2026-02-28 
00:22:53.007511 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.40s 2026-02-28 00:22:53.007525 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-02-28 00:22:53.007538 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-02-28 00:22:53.007550 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-02-28 00:22:53.007564 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-02-28 00:22:53.007576 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-02-28 00:22:53.007589 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-02-28 00:22:53.007602 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-02-28 00:22:53.007613 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-02-28 00:22:53.007624 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-02-28 00:22:53.007635 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-02-28 00:22:53.007664 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-02-28 00:22:53.007683 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-02-28 00:22:53.007695 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-02-28 00:22:53.007706 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-02-28 00:22:53.007717 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.75s 2026-02-28 
00:22:53.007728 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-02-28 00:22:53.007739 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2026-02-28 00:22:53.007750 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-02-28 00:22:53.305363 | orchestrator | + osism apply squid 2026-02-28 00:23:05.361148 | orchestrator | 2026-02-28 00:23:05 | INFO  | Prepare task for execution of squid. 2026-02-28 00:23:05.435508 | orchestrator | 2026-02-28 00:23:05 | INFO  | Task d95a4285-9cff-42f3-9c17-449b94f362d8 (squid) was prepared for execution. 2026-02-28 00:23:05.435609 | orchestrator | 2026-02-28 00:23:05 | INFO  | It takes a moment until task d95a4285-9cff-42f3-9c17-449b94f362d8 (squid) has been started and output is visible here. 2026-02-28 00:25:11.631261 | orchestrator | 2026-02-28 00:25:11.631343 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-02-28 00:25:11.631352 | orchestrator | 2026-02-28 00:25:11.631358 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-02-28 00:25:11.631365 | orchestrator | Saturday 28 February 2026 00:23:09 +0000 (0:00:00.164) 0:00:00.164 ***** 2026-02-28 00:25:11.631371 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-02-28 00:25:11.631378 | orchestrator | 2026-02-28 00:25:11.631383 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-02-28 00:25:11.631389 | orchestrator | Saturday 28 February 2026 00:23:09 +0000 (0:00:00.097) 0:00:00.262 ***** 2026-02-28 00:25:11.631395 | orchestrator | ok: [testbed-manager] 2026-02-28 00:25:11.631402 | orchestrator | 2026-02-28 00:25:11.631407 | 
orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-02-28 00:25:11.631413 | orchestrator | Saturday 28 February 2026 00:23:11 +0000 (0:00:01.484) 0:00:01.747 ***** 2026-02-28 00:25:11.631419 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-02-28 00:25:11.631424 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-02-28 00:25:11.631430 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-02-28 00:25:11.631436 | orchestrator | 2026-02-28 00:25:11.631441 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-02-28 00:25:11.631447 | orchestrator | Saturday 28 February 2026 00:23:12 +0000 (0:00:01.176) 0:00:02.923 ***** 2026-02-28 00:25:11.631452 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-02-28 00:25:11.631458 | orchestrator | 2026-02-28 00:25:11.631464 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-02-28 00:25:11.631469 | orchestrator | Saturday 28 February 2026 00:23:13 +0000 (0:00:01.098) 0:00:04.022 ***** 2026-02-28 00:25:11.631474 | orchestrator | ok: [testbed-manager] 2026-02-28 00:25:11.631480 | orchestrator | 2026-02-28 00:25:11.631485 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-02-28 00:25:11.631491 | orchestrator | Saturday 28 February 2026 00:23:13 +0000 (0:00:00.397) 0:00:04.419 ***** 2026-02-28 00:25:11.631496 | orchestrator | changed: [testbed-manager] 2026-02-28 00:25:11.631502 | orchestrator | 2026-02-28 00:25:11.631508 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-02-28 00:25:11.631513 | orchestrator | Saturday 28 February 2026 00:23:14 +0000 (0:00:00.901) 0:00:05.321 ***** 2026-02-28 00:25:11.631519 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 
retries left). 2026-02-28 00:25:11.631525 | orchestrator | ok: [testbed-manager] 2026-02-28 00:25:11.631531 | orchestrator | 2026-02-28 00:25:11.631536 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-02-28 00:25:11.631542 | orchestrator | Saturday 28 February 2026 00:23:51 +0000 (0:00:36.246) 0:00:41.567 ***** 2026-02-28 00:25:11.631547 | orchestrator | changed: [testbed-manager] 2026-02-28 00:25:11.631553 | orchestrator | 2026-02-28 00:25:11.631572 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-02-28 00:25:11.631578 | orchestrator | Saturday 28 February 2026 00:24:10 +0000 (0:00:19.494) 0:01:01.062 ***** 2026-02-28 00:25:11.631584 | orchestrator | Pausing for 60 seconds 2026-02-28 00:25:11.631590 | orchestrator | changed: [testbed-manager] 2026-02-28 00:25:11.631596 | orchestrator | 2026-02-28 00:25:11.631601 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-02-28 00:25:11.631626 | orchestrator | Saturday 28 February 2026 00:25:10 +0000 (0:01:00.089) 0:02:01.152 ***** 2026-02-28 00:25:11.631632 | orchestrator | ok: [testbed-manager] 2026-02-28 00:25:11.631638 | orchestrator | 2026-02-28 00:25:11.631643 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-02-28 00:25:11.631649 | orchestrator | Saturday 28 February 2026 00:25:10 +0000 (0:00:00.081) 0:02:01.233 ***** 2026-02-28 00:25:11.631654 | orchestrator | changed: [testbed-manager] 2026-02-28 00:25:11.631660 | orchestrator | 2026-02-28 00:25:11.631666 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:25:11.631671 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:25:11.631677 | orchestrator | 2026-02-28 00:25:11.631683 | orchestrator | 2026-02-28 00:25:11.631688 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:25:11.631694 | orchestrator | Saturday 28 February 2026 00:25:11 +0000 (0:00:00.625) 0:02:01.858 ***** 2026-02-28 00:25:11.631699 | orchestrator | =============================================================================== 2026-02-28 00:25:11.631705 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-02-28 00:25:11.631710 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 36.25s 2026-02-28 00:25:11.631716 | orchestrator | osism.services.squid : Restart squid service --------------------------- 19.49s 2026-02-28 00:25:11.631721 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.48s 2026-02-28 00:25:11.631727 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.18s 2026-02-28 00:25:11.631732 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.10s 2026-02-28 00:25:11.631738 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.90s 2026-02-28 00:25:11.631743 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.63s 2026-02-28 00:25:11.631749 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.40s 2026-02-28 00:25:11.631754 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2026-02-28 00:25:11.631759 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2026-02-28 00:25:11.960992 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-02-28 00:25:11.961099 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-02-28 00:25:11.964994 | orchestrator | + set -e 2026-02-28 00:25:11.965025 | orchestrator | + NAMESPACE=kolla 
2026-02-28 00:25:11.965037 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-28 00:25:11.970711 | orchestrator | ++ semver latest 9.0.0 2026-02-28 00:25:12.013362 | orchestrator | + [[ -1 -lt 0 ]] 2026-02-28 00:25:12.013449 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-02-28 00:25:12.014203 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-02-28 00:25:24.111329 | orchestrator | 2026-02-28 00:25:24 | INFO  | Prepare task for execution of operator. 2026-02-28 00:25:24.187999 | orchestrator | 2026-02-28 00:25:24 | INFO  | Task 4744ea89-f7f9-4cb8-a196-74413092259c (operator) was prepared for execution. 2026-02-28 00:25:24.188085 | orchestrator | 2026-02-28 00:25:24 | INFO  | It takes a moment until task 4744ea89-f7f9-4cb8-a196-74413092259c (operator) has been started and output is visible here. 2026-02-28 00:25:41.412593 | orchestrator | 2026-02-28 00:25:41.412730 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-02-28 00:25:41.412758 | orchestrator | 2026-02-28 00:25:41.412778 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:25:41.412798 | orchestrator | Saturday 28 February 2026 00:25:28 +0000 (0:00:00.151) 0:00:00.151 ***** 2026-02-28 00:25:41.412883 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:25:41.412905 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:25:41.412923 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:25:41.412985 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:25:41.413009 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:25:41.413029 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:25:41.413049 | orchestrator | 2026-02-28 00:25:41.413070 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-02-28 00:25:41.413090 | orchestrator | Saturday 28 February 
2026 00:25:32 +0000 (0:00:04.358) 0:00:04.510 *****
2026-02-28 00:25:41.413111 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:25:41.413156 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:25:41.413179 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:25:41.413219 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:25:41.413241 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:25:41.413261 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:25:41.413282 | orchestrator |
2026-02-28 00:25:41.413304 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-02-28 00:25:41.413327 | orchestrator |
2026-02-28 00:25:41.413349 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-02-28 00:25:41.413372 | orchestrator | Saturday 28 February 2026 00:25:33 +0000 (0:00:00.768) 0:00:05.278 *****
2026-02-28 00:25:41.413394 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:25:41.413415 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:25:41.413437 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:25:41.413460 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:25:41.413480 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:25:41.413500 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:25:41.413519 | orchestrator |
2026-02-28 00:25:41.413538 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-02-28 00:25:41.413556 | orchestrator | Saturday 28 February 2026 00:25:33 +0000 (0:00:00.180) 0:00:05.459 *****
2026-02-28 00:25:41.413575 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:25:41.413592 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:25:41.413609 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:25:41.413626 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:25:41.413667 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:25:41.413685 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:25:41.413702 | orchestrator |
2026-02-28 00:25:41.413719 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-02-28 00:25:41.413738 | orchestrator | Saturday 28 February 2026 00:25:33 +0000 (0:00:00.179) 0:00:05.638 *****
2026-02-28 00:25:41.413757 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:25:41.413775 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:25:41.413793 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:25:41.413913 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:25:41.413937 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:25:41.413954 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:25:41.413970 | orchestrator |
2026-02-28 00:25:41.413988 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-02-28 00:25:41.414007 | orchestrator | Saturday 28 February 2026 00:25:34 +0000 (0:00:00.660) 0:00:06.299 *****
2026-02-28 00:25:41.414107 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:25:41.414120 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:25:41.414131 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:25:41.414142 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:25:41.414153 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:25:41.414164 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:25:41.414176 | orchestrator |
2026-02-28 00:25:41.414187 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-02-28 00:25:41.414198 | orchestrator | Saturday 28 February 2026 00:25:35 +0000 (0:00:00.853) 0:00:07.152 *****
2026-02-28 00:25:41.414209 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-02-28 00:25:41.414221 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-02-28 00:25:41.414232 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-02-28 00:25:41.414243 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-02-28 00:25:41.414254 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-02-28 00:25:41.414282 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-02-28 00:25:41.414293 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-02-28 00:25:41.414304 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-02-28 00:25:41.414315 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-02-28 00:25:41.414326 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-02-28 00:25:41.414337 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-02-28 00:25:41.414348 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-02-28 00:25:41.414359 | orchestrator |
2026-02-28 00:25:41.414370 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-02-28 00:25:41.414381 | orchestrator | Saturday 28 February 2026 00:25:36 +0000 (0:00:01.188) 0:00:08.341 *****
2026-02-28 00:25:41.414392 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:25:41.414403 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:25:41.414414 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:25:41.414424 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:25:41.414435 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:25:41.414446 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:25:41.414457 | orchestrator |
2026-02-28 00:25:41.414468 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-02-28 00:25:41.414480 | orchestrator | Saturday 28 February 2026 00:25:37 +0000 (0:00:01.220) 0:00:09.561 *****
2026-02-28 00:25:41.414491 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-02-28 00:25:41.414502 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-02-28 00:25:41.414514 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-02-28 00:25:41.414525 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-02-28 00:25:41.414536 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-02-28 00:25:41.414572 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-02-28 00:25:41.414584 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-02-28 00:25:41.414595 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-02-28 00:25:41.414606 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-02-28 00:25:41.414616 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-02-28 00:25:41.414628 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-02-28 00:25:41.414639 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-02-28 00:25:41.414649 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-02-28 00:25:41.414660 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-02-28 00:25:41.414672 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-02-28 00:25:41.414683 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-02-28 00:25:41.414694 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-02-28 00:25:41.414705 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-02-28 00:25:41.414716 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-02-28 00:25:41.414727 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-02-28 00:25:41.414738 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-02-28 00:25:41.414749 | orchestrator |
2026-02-28 00:25:41.414760 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-02-28 00:25:41.414772 | orchestrator | Saturday 28 February 2026 00:25:39 +0000 (0:00:01.357) 0:00:10.919 *****
2026-02-28 00:25:41.414783 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:25:41.414794 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:25:41.414843 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:25:41.414874 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:25:41.414886 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:25:41.414897 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:25:41.414908 | orchestrator |
2026-02-28 00:25:41.414919 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-02-28 00:25:41.414930 | orchestrator | Saturday 28 February 2026 00:25:39 +0000 (0:00:00.156) 0:00:11.076 *****
2026-02-28 00:25:41.414941 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:25:41.414952 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:25:41.414963 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:25:41.414973 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:25:41.414984 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:25:41.414995 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:25:41.415006 | orchestrator |
2026-02-28 00:25:41.415017 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-02-28 00:25:41.415028 | orchestrator | Saturday 28 February 2026 00:25:39 +0000 (0:00:00.198) 0:00:11.274 *****
2026-02-28 00:25:41.415039 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:25:41.415050 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:25:41.415060 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:25:41.415071 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:25:41.415082 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:25:41.415093 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:25:41.415103 | orchestrator |
2026-02-28 00:25:41.415114 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-02-28 00:25:41.415125 | orchestrator | Saturday 28 February 2026 00:25:40 +0000 (0:00:00.601) 0:00:11.876 *****
2026-02-28 00:25:41.415136 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:25:41.415147 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:25:41.415158 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:25:41.415169 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:25:41.415180 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:25:41.415191 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:25:41.415202 | orchestrator |
2026-02-28 00:25:41.415213 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-02-28 00:25:41.415224 | orchestrator | Saturday 28 February 2026 00:25:40 +0000 (0:00:00.172) 0:00:12.048 *****
2026-02-28 00:25:41.415235 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-02-28 00:25:41.415246 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:25:41.415256 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-28 00:25:41.415267 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:25:41.415278 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-28 00:25:41.415289 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-28 00:25:41.415300 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:25:41.415311 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-28 00:25:41.415322 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:25:41.415333 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:25:41.415343 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-28 00:25:41.415354 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:25:41.415365 | orchestrator |
2026-02-28 00:25:41.415376 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-02-28 00:25:41.415387 | orchestrator | Saturday 28 February 2026 00:25:41 +0000 (0:00:00.727) 0:00:12.776 *****
2026-02-28 00:25:41.415398 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:25:41.415409 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:25:41.415420 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:25:41.415431 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:25:41.415441 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:25:41.415452 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:25:41.415463 | orchestrator |
2026-02-28 00:25:41.415474 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-02-28 00:25:41.415485 | orchestrator | Saturday 28 February 2026 00:25:41 +0000 (0:00:00.175) 0:00:12.951 *****
2026-02-28 00:25:41.415581 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:25:41.415596 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:25:41.415607 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:25:41.415618 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:25:41.415638 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:25:42.784302 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:25:42.784416 | orchestrator |
2026-02-28 00:25:42.784434 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-02-28 00:25:42.784447 | orchestrator | Saturday 28 February 2026 00:25:41 +0000 (0:00:00.157) 0:00:13.109 *****
2026-02-28 00:25:42.784459 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:25:42.784470 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:25:42.784481 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:25:42.784493 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:25:42.784504 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:25:42.784515 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:25:42.784526 | orchestrator |
2026-02-28 00:25:42.784537 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-02-28 00:25:42.784548 | orchestrator | Saturday 28 February 2026 00:25:41 +0000 (0:00:00.175) 0:00:13.285 *****
2026-02-28 00:25:42.784560 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:25:42.784570 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:25:42.784581 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:25:42.784592 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:25:42.784603 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:25:42.784614 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:25:42.784625 | orchestrator |
2026-02-28 00:25:42.784636 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-02-28 00:25:42.784647 | orchestrator | Saturday 28 February 2026 00:25:42 +0000 (0:00:00.686) 0:00:13.971 *****
2026-02-28 00:25:42.784658 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:25:42.784669 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:25:42.784680 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:25:42.784691 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:25:42.784702 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:25:42.784713 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:25:42.784724 | orchestrator |
2026-02-28 00:25:42.784735 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:25:42.784748 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 00:25:42.784782 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 00:25:42.784794 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 00:25:42.784806 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 00:25:42.784911 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 00:25:42.784926 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 00:25:42.784939 | orchestrator |
2026-02-28 00:25:42.784952 | orchestrator |
2026-02-28 00:25:42.784965 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:25:42.784977 | orchestrator | Saturday 28 February 2026 00:25:42 +0000 (0:00:00.248) 0:00:14.220 *****
2026-02-28 00:25:42.784990 | orchestrator | ===============================================================================
2026-02-28 00:25:42.785024 | orchestrator | Gathering Facts --------------------------------------------------------- 4.36s
2026-02-28 00:25:42.785038 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.36s
2026-02-28 00:25:42.785051 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.22s
2026-02-28 00:25:42.785063 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.19s
2026-02-28 00:25:42.785076 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.85s
2026-02-28 00:25:42.785089 | orchestrator | Do not require tty for all users ---------------------------------------- 0.77s
2026-02-28 00:25:42.785101 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.73s
2026-02-28 00:25:42.785113 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.69s
2026-02-28 00:25:42.785126 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.66s
2026-02-28 00:25:42.785139 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.60s
2026-02-28 00:25:42.785151 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.25s
2026-02-28 00:25:42.785164 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.20s
2026-02-28 00:25:42.785177 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s
2026-02-28 00:25:42.785189 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s
2026-02-28 00:25:42.785201 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s
2026-02-28 00:25:42.785212 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s
2026-02-28 00:25:42.785223 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s
2026-02-28 00:25:42.785234 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s
2026-02-28 00:25:42.785244 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s
2026-02-28 00:25:43.097693 | orchestrator | + osism apply --environment custom facts
2026-02-28 00:25:45.099091 | orchestrator | 2026-02-28 00:25:45 | INFO  | Trying to run play facts in environment custom
2026-02-28 00:25:55.118123 | orchestrator | 2026-02-28 00:25:55 | INFO  | Prepare task for execution of facts.
2026-02-28 00:25:55.205988 | orchestrator | 2026-02-28 00:25:55 | INFO  | Task e6005881-678c-4499-8b8f-3b830bcaa55d (facts) was prepared for execution.
2026-02-28 00:25:55.206135 | orchestrator | 2026-02-28 00:25:55 | INFO  | It takes a moment until task e6005881-678c-4499-8b8f-3b830bcaa55d (facts) has been started and output is visible here.
2026-02-28 00:26:40.454213 | orchestrator |
2026-02-28 00:26:40.454325 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-02-28 00:26:40.454340 | orchestrator |
2026-02-28 00:26:40.454352 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-28 00:26:40.454362 | orchestrator | Saturday 28 February 2026 00:25:59 +0000 (0:00:00.070) 0:00:00.070 *****
2026-02-28 00:26:40.454372 | orchestrator | ok: [testbed-manager]
2026-02-28 00:26:40.454383 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:26:40.454394 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:26:40.454404 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:26:40.454413 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:26:40.454440 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:26:40.454450 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:26:40.454460 | orchestrator |
2026-02-28 00:26:40.454470 | orchestrator | TASK [Copy fact file] **********************************************************
2026-02-28 00:26:40.454538 | orchestrator | Saturday 28 February 2026 00:26:00 +0000 (0:00:01.391) 0:00:01.462 *****
2026-02-28 00:26:40.454551 | orchestrator | ok: [testbed-manager]
2026-02-28 00:26:40.454562 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:26:40.454572 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:26:40.454608 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:26:40.454631 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:26:40.454641 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:26:40.454651 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:26:40.454661 | orchestrator |
2026-02-28 00:26:40.454671 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-02-28 00:26:40.454681 | orchestrator |
2026-02-28 00:26:40.454690 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-28 00:26:40.454700 | orchestrator | Saturday 28 February 2026 00:26:02 +0000 (0:00:01.220) 0:00:02.682 *****
2026-02-28 00:26:40.454710 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:40.454720 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:40.454729 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:40.454739 | orchestrator |
2026-02-28 00:26:40.454750 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-28 00:26:40.454762 | orchestrator | Saturday 28 February 2026 00:26:02 +0000 (0:00:00.128) 0:00:02.811 *****
2026-02-28 00:26:40.454773 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:40.454784 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:40.454795 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:40.454806 | orchestrator |
2026-02-28 00:26:40.454817 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-28 00:26:40.454828 | orchestrator | Saturday 28 February 2026 00:26:02 +0000 (0:00:00.203) 0:00:03.014 *****
2026-02-28 00:26:40.454839 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:40.454850 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:40.454926 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:40.454937 | orchestrator |
2026-02-28 00:26:40.454949 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-28 00:26:40.454960 | orchestrator | Saturday 28 February 2026 00:26:02 +0000 (0:00:00.220) 0:00:03.235 *****
2026-02-28 00:26:40.454973 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:26:40.454986 | orchestrator |
2026-02-28 00:26:40.454997 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-28 00:26:40.455009 | orchestrator | Saturday 28 February 2026 00:26:02 +0000 (0:00:00.140) 0:00:03.375 *****
2026-02-28 00:26:40.455021 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:40.455032 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:40.455043 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:40.455054 | orchestrator |
2026-02-28 00:26:40.455065 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-28 00:26:40.455076 | orchestrator | Saturday 28 February 2026 00:26:03 +0000 (0:00:00.460) 0:00:03.835 *****
2026-02-28 00:26:40.455088 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:26:40.455099 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:26:40.455110 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:26:40.455121 | orchestrator |
2026-02-28 00:26:40.455132 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-28 00:26:40.455142 | orchestrator | Saturday 28 February 2026 00:26:03 +0000 (0:00:00.147) 0:00:03.983 *****
2026-02-28 00:26:40.455151 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:26:40.455161 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:26:40.455171 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:26:40.455180 | orchestrator |
2026-02-28 00:26:40.455190 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-28 00:26:40.455200 | orchestrator | Saturday 28 February 2026 00:26:04 +0000 (0:00:01.069) 0:00:05.052 *****
2026-02-28 00:26:40.455210 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:40.455220 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:40.455229 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:40.455239 | orchestrator |
2026-02-28 00:26:40.455249 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-28 00:26:40.455269 | orchestrator | Saturday 28 February 2026 00:26:04 +0000 (0:00:00.478) 0:00:05.530 *****
2026-02-28 00:26:40.455279 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:26:40.455288 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:26:40.455298 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:26:40.455308 | orchestrator |
2026-02-28 00:26:40.455317 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-28 00:26:40.455337 | orchestrator | Saturday 28 February 2026 00:26:05 +0000 (0:00:01.120) 0:00:06.651 *****
2026-02-28 00:26:40.455347 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:26:40.455357 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:26:40.455366 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:26:40.455376 | orchestrator |
2026-02-28 00:26:40.455385 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-02-28 00:26:40.455431 | orchestrator | Saturday 28 February 2026 00:26:22 +0000 (0:00:16.436) 0:00:23.087 *****
2026-02-28 00:26:40.455441 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:26:40.455451 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:26:40.455461 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:26:40.455532 | orchestrator |
2026-02-28 00:26:40.455543 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-02-28 00:26:40.455572 | orchestrator | Saturday 28 February 2026 00:26:22 +0000 (0:00:00.096) 0:00:23.183 *****
2026-02-28 00:26:40.455582 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:26:40.455592 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:26:40.455602 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:26:40.455612 | orchestrator |
2026-02-28 00:26:40.455622 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-28 00:26:40.455631 | orchestrator | Saturday 28 February 2026 00:26:30 +0000 (0:00:08.329) 0:00:31.513 *****
2026-02-28 00:26:40.455641 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:40.455807 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:40.455821 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:40.455831 | orchestrator |
2026-02-28 00:26:40.455840 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-28 00:26:40.455850 | orchestrator | Saturday 28 February 2026 00:26:31 +0000 (0:00:00.485) 0:00:31.998 *****
2026-02-28 00:26:40.455921 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-02-28 00:26:40.455940 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-02-28 00:26:40.455956 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-02-28 00:26:40.455967 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-02-28 00:26:40.455977 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-02-28 00:26:40.455987 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-02-28 00:26:40.455997 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-02-28 00:26:40.456006 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-02-28 00:26:40.456016 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-02-28 00:26:40.456026 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-02-28 00:26:40.456035 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-02-28 00:26:40.456045 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-02-28 00:26:40.456054 | orchestrator |
2026-02-28 00:26:40.456064 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-28 00:26:40.456074 | orchestrator | Saturday 28 February 2026 00:26:35 +0000 (0:00:03.754) 0:00:35.752 *****
2026-02-28 00:26:40.456083 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:40.456093 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:40.456103 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:40.456112 | orchestrator |
2026-02-28 00:26:40.456122 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-28 00:26:40.456142 | orchestrator |
2026-02-28 00:26:40.456152 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-28 00:26:40.456161 | orchestrator | Saturday 28 February 2026 00:26:36 +0000 (0:00:01.448) 0:00:37.200 *****
2026-02-28 00:26:40.456171 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:26:40.456181 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:26:40.456190 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:26:40.456200 | orchestrator | ok: [testbed-manager]
2026-02-28 00:26:40.456209 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:40.456219 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:40.456267 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:40.456278 | orchestrator |
2026-02-28 00:26:40.456288 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:26:40.456298 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:26:40.456309 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:26:40.456320 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:26:40.456330 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:26:40.456340 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:26:40.456350 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:26:40.456359 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:26:40.456369 | orchestrator |
2026-02-28 00:26:40.456379 | orchestrator |
2026-02-28 00:26:40.456388 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:26:40.456398 | orchestrator | Saturday 28 February 2026 00:26:40 +0000 (0:00:03.884) 0:00:41.084 *****
2026-02-28 00:26:40.456408 | orchestrator | ===============================================================================
2026-02-28 00:26:40.456418 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.44s
2026-02-28 00:26:40.456428 | orchestrator | Install required packages (Debian) -------------------------------------- 8.33s
2026-02-28 00:26:40.456437 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.88s
2026-02-28 00:26:40.456447 | orchestrator | Copy fact files --------------------------------------------------------- 3.75s
2026-02-28 00:26:40.456456 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.45s
2026-02-28 00:26:40.456466 | orchestrator | Create custom facts directory ------------------------------------------- 1.39s
2026-02-28 00:26:40.456486 | orchestrator | Copy fact file ---------------------------------------------------------- 1.22s
2026-02-28 00:26:40.666725 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.12s
2026-02-28 00:26:40.666827 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.07s
2026-02-28 00:26:40.666842 | orchestrator | Create custom facts directory ------------------------------------------- 0.49s
2026-02-28 00:26:40.666925 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.48s
2026-02-28 00:26:40.666940 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.46s
2026-02-28 00:26:40.666951 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s
2026-02-28 00:26:40.666962 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s
2026-02-28 00:26:40.666974 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.15s
2026-02-28 00:26:40.667013 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2026-02-28 00:26:40.667041 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.13s
2026-02-28 00:26:40.667053 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-02-28 00:26:41.013833 | orchestrator | + osism apply bootstrap
2026-02-28 00:26:53.119735 | orchestrator | 2026-02-28 00:26:53 | INFO  | Prepare task for execution of bootstrap.
2026-02-28 00:26:53.193100 | orchestrator | 2026-02-28 00:26:53 | INFO  | Task 081e6794-74c6-4f74-9c82-c9e57bfeb918 (bootstrap) was prepared for execution.
2026-02-28 00:26:53.193201 | orchestrator | 2026-02-28 00:26:53 | INFO  | It takes a moment until task 081e6794-74c6-4f74-9c82-c9e57bfeb918 (bootstrap) has been started and output is visible here.
2026-02-28 00:27:09.868536 | orchestrator |
2026-02-28 00:27:09.868684 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-02-28 00:27:09.868714 | orchestrator |
2026-02-28 00:27:09.868734 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-02-28 00:27:09.868755 | orchestrator | Saturday 28 February 2026 00:26:57 +0000 (0:00:00.153) 0:00:00.153 *****
2026-02-28 00:27:09.868774 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:27:09.868793 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:27:09.868809 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:27:09.868825 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:27:09.868843 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:27:09.868862 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:27:09.868938 | orchestrator | ok: [testbed-manager]
2026-02-28 00:27:09.868961 | orchestrator |
2026-02-28 00:27:09.868979 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-28 00:27:09.868999 | orchestrator |
2026-02-28 00:27:09.869011 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-28 00:27:09.869023 | orchestrator | Saturday 28 February 2026 00:26:58 +0000 (0:00:00.335) 0:00:00.489 *****
2026-02-28 00:27:09.869034 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:27:09.869045 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:27:09.869056 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:27:09.869070 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:27:09.869083 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:27:09.869095 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:27:09.869108 | orchestrator | ok: [testbed-manager]
2026-02-28 00:27:09.869127 | orchestrator |
2026-02-28 00:27:09.869145 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-02-28 00:27:09.869164 | orchestrator |
2026-02-28 00:27:09.869183 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-28 00:27:09.869203 | orchestrator | Saturday 28 February 2026 00:27:01 +0000 (0:00:03.840) 0:00:04.329 *****
2026-02-28 00:27:09.869225 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-28 00:27:09.869244 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-28 00:27:09.869260 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-28 00:27:09.869271 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-28 00:27:09.869287 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-28 00:27:09.869306 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-28 00:27:09.869324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-28 00:27:09.869343 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-28 00:27:09.869363 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-28 00:27:09.869382 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-28 00:27:09.869401 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-28 00:27:09.869413 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-02-28 00:27:09.869424 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-28 00:27:09.869460 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:27:09.869471 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-28 00:27:09.869482 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-28 00:27:09.869493 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-28 00:27:09.869504 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-28 00:27:09.869515 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-28 00:27:09.869525 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-28 00:27:09.869536 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-28 00:27:09.869547 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-02-28 00:27:09.869558 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:27:09.869568 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-28 00:27:09.869579 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-28 00:27:09.869590 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-28 00:27:09.869601 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-28 00:27:09.869611 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-28 00:27:09.869622 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-28 00:27:09.869633 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-28 00:27:09.869644 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-02-28 00:27:09.869654 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-28 00:27:09.869665 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-28 00:27:09.869676 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-28 00:27:09.869687 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-02-28 00:27:09.869697 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:27:09.869708 | orchestrator | skipping: [testbed-manager] =>
(item=testbed-node-4)  2026-02-28 00:27:09.869719 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-02-28 00:27:09.869730 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:27:09.869741 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-28 00:27:09.869752 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-28 00:27:09.869763 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-28 00:27:09.869774 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-28 00:27:09.869785 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-28 00:27:09.869795 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-28 00:27:09.869806 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-28 00:27:09.869836 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-28 00:27:09.869848 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-28 00:27:09.869859 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-02-28 00:27:09.869870 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:27:09.869920 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-28 00:27:09.869933 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-28 00:27:09.869944 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-28 00:27:09.869955 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:27:09.869966 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-02-28 00:27:09.869976 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:27:09.869987 | orchestrator | 2026-02-28 00:27:09.869998 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-02-28 00:27:09.870009 | orchestrator | 2026-02-28 
00:27:09.870082 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-02-28 00:27:09.870106 | orchestrator | Saturday 28 February 2026 00:27:02 +0000 (0:00:00.455) 0:00:04.784 ***** 2026-02-28 00:27:09.870117 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:27:09.870128 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:27:09.870139 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:27:09.870150 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:27:09.870161 | orchestrator | ok: [testbed-manager] 2026-02-28 00:27:09.870171 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:27:09.870182 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:27:09.870193 | orchestrator | 2026-02-28 00:27:09.870204 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-02-28 00:27:09.870216 | orchestrator | Saturday 28 February 2026 00:27:03 +0000 (0:00:01.224) 0:00:06.009 ***** 2026-02-28 00:27:09.870227 | orchestrator | ok: [testbed-manager] 2026-02-28 00:27:09.870238 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:27:09.870249 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:27:09.870259 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:27:09.870270 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:27:09.870281 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:27:09.870292 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:27:09.870303 | orchestrator | 2026-02-28 00:27:09.870314 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-02-28 00:27:09.870325 | orchestrator | Saturday 28 February 2026 00:27:04 +0000 (0:00:01.328) 0:00:07.338 ***** 2026-02-28 00:27:09.870337 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-28 
00:27:09.870351 | orchestrator | 2026-02-28 00:27:09.870362 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-02-28 00:27:09.870374 | orchestrator | Saturday 28 February 2026 00:27:05 +0000 (0:00:00.296) 0:00:07.634 ***** 2026-02-28 00:27:09.870384 | orchestrator | changed: [testbed-manager] 2026-02-28 00:27:09.870396 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:27:09.870406 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:27:09.870417 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:27:09.870428 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:27:09.870439 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:27:09.870450 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:27:09.870461 | orchestrator | 2026-02-28 00:27:09.870472 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-02-28 00:27:09.870483 | orchestrator | Saturday 28 February 2026 00:27:07 +0000 (0:00:02.123) 0:00:09.758 ***** 2026-02-28 00:27:09.870494 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:27:09.870507 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:27:09.870519 | orchestrator | 2026-02-28 00:27:09.870530 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-02-28 00:27:09.870541 | orchestrator | Saturday 28 February 2026 00:27:07 +0000 (0:00:00.297) 0:00:10.055 ***** 2026-02-28 00:27:09.870552 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:27:09.870563 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:27:09.870574 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:27:09.870585 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:27:09.870596 | orchestrator | changed: 
[testbed-node-0] 2026-02-28 00:27:09.870606 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:27:09.870617 | orchestrator | 2026-02-28 00:27:09.870645 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-02-28 00:27:09.870657 | orchestrator | Saturday 28 February 2026 00:27:08 +0000 (0:00:01.023) 0:00:11.079 ***** 2026-02-28 00:27:09.870668 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:27:09.870679 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:27:09.870697 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:27:09.870708 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:27:09.870719 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:27:09.870730 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:27:09.870741 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:27:09.870752 | orchestrator | 2026-02-28 00:27:09.870763 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-02-28 00:27:09.870782 | orchestrator | Saturday 28 February 2026 00:27:09 +0000 (0:00:00.535) 0:00:11.614 ***** 2026-02-28 00:27:09.870801 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:27:09.870819 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:27:09.870837 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:27:09.870855 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:27:09.870870 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:27:09.870933 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:27:09.870953 | orchestrator | ok: [testbed-manager] 2026-02-28 00:27:09.870971 | orchestrator | 2026-02-28 00:27:09.870986 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-28 00:27:09.870999 | orchestrator | Saturday 28 February 2026 00:27:09 +0000 (0:00:00.569) 0:00:12.183 ***** 2026-02-28 00:27:09.871010 | orchestrator | skipping: 
[testbed-node-3] 2026-02-28 00:27:09.871020 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:27:09.871043 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:27:21.880549 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:27:21.880654 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:27:21.880668 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:27:21.880679 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:27:21.880690 | orchestrator | 2026-02-28 00:27:21.880701 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-28 00:27:21.880713 | orchestrator | Saturday 28 February 2026 00:27:09 +0000 (0:00:00.206) 0:00:12.390 ***** 2026-02-28 00:27:21.880725 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-28 00:27:21.880751 | orchestrator | 2026-02-28 00:27:21.880762 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-28 00:27:21.880773 | orchestrator | Saturday 28 February 2026 00:27:10 +0000 (0:00:00.322) 0:00:12.712 ***** 2026-02-28 00:27:21.880783 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-28 00:27:21.880793 | orchestrator | 2026-02-28 00:27:21.880803 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-02-28 00:27:21.880813 | orchestrator | Saturday 28 February 2026 00:27:10 +0000 (0:00:00.446) 0:00:13.159 ***** 2026-02-28 00:27:21.880823 | orchestrator | ok: [testbed-manager] 2026-02-28 00:27:21.880834 | orchestrator | ok: 
[testbed-node-5] 2026-02-28 00:27:21.880844 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:27:21.880853 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:27:21.880863 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:27:21.880873 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:27:21.880883 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:27:21.880938 | orchestrator | 2026-02-28 00:27:21.880950 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-28 00:27:21.880960 | orchestrator | Saturday 28 February 2026 00:27:12 +0000 (0:00:01.325) 0:00:14.485 ***** 2026-02-28 00:27:21.880975 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:27:21.880993 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:27:21.881018 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:27:21.881038 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:27:21.881055 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:27:21.881098 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:27:21.881117 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:27:21.881135 | orchestrator | 2026-02-28 00:27:21.881152 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-28 00:27:21.881170 | orchestrator | Saturday 28 February 2026 00:27:12 +0000 (0:00:00.228) 0:00:14.713 ***** 2026-02-28 00:27:21.881189 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:27:21.881207 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:27:21.881230 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:27:21.881248 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:27:21.881267 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:27:21.881285 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:27:21.881303 | orchestrator | ok: [testbed-manager] 2026-02-28 00:27:21.881314 | orchestrator | 2026-02-28 00:27:21.881325 | orchestrator | TASK [osism.commons.resolvconf : Archive 
existing file /etc/resolv.conf] ******* 2026-02-28 00:27:21.881336 | orchestrator | Saturday 28 February 2026 00:27:12 +0000 (0:00:00.588) 0:00:15.302 ***** 2026-02-28 00:27:21.881347 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:27:21.881358 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:27:21.881369 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:27:21.881381 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:27:21.881392 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:27:21.881403 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:27:21.881414 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:27:21.881425 | orchestrator | 2026-02-28 00:27:21.881436 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-28 00:27:21.881448 | orchestrator | Saturday 28 February 2026 00:27:13 +0000 (0:00:00.269) 0:00:15.571 ***** 2026-02-28 00:27:21.881457 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:27:21.881467 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:27:21.881477 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:27:21.881486 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:27:21.881496 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:27:21.881505 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:27:21.881515 | orchestrator | ok: [testbed-manager] 2026-02-28 00:27:21.881525 | orchestrator | 2026-02-28 00:27:21.881534 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-28 00:27:21.881544 | orchestrator | Saturday 28 February 2026 00:27:13 +0000 (0:00:00.582) 0:00:16.154 ***** 2026-02-28 00:27:21.881554 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:27:21.881563 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:27:21.881573 | orchestrator | ok: [testbed-manager] 2026-02-28 00:27:21.881582 | orchestrator | changed: 
[testbed-node-1] 2026-02-28 00:27:21.881592 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:27:21.881601 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:27:21.881611 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:27:21.881620 | orchestrator | 2026-02-28 00:27:21.881640 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-28 00:27:21.881650 | orchestrator | Saturday 28 February 2026 00:27:14 +0000 (0:00:01.178) 0:00:17.333 ***** 2026-02-28 00:27:21.881660 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:27:21.881670 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:27:21.881679 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:27:21.881689 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:27:21.881698 | orchestrator | ok: [testbed-manager] 2026-02-28 00:27:21.881708 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:27:21.881717 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:27:21.881727 | orchestrator | 2026-02-28 00:27:21.881737 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-28 00:27:21.881746 | orchestrator | Saturday 28 February 2026 00:27:15 +0000 (0:00:01.055) 0:00:18.388 ***** 2026-02-28 00:27:21.881776 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-28 00:27:21.881800 | orchestrator | 2026-02-28 00:27:21.881824 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-28 00:27:21.881842 | orchestrator | Saturday 28 February 2026 00:27:16 +0000 (0:00:00.307) 0:00:18.696 ***** 2026-02-28 00:27:21.881856 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:27:21.881870 | orchestrator | changed: [testbed-node-4] 2026-02-28 
00:27:21.881884 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:27:21.881927 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:27:21.881942 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:27:21.881957 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:27:21.881971 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:27:21.881985 | orchestrator | 2026-02-28 00:27:21.882000 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-28 00:27:21.882082 | orchestrator | Saturday 28 February 2026 00:27:17 +0000 (0:00:01.218) 0:00:19.915 ***** 2026-02-28 00:27:21.882096 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:27:21.882108 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:27:21.882125 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:27:21.882140 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:27:21.882157 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:27:21.882174 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:27:21.882191 | orchestrator | ok: [testbed-manager] 2026-02-28 00:27:21.882208 | orchestrator | 2026-02-28 00:27:21.882221 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-28 00:27:21.882231 | orchestrator | Saturday 28 February 2026 00:27:17 +0000 (0:00:00.223) 0:00:20.138 ***** 2026-02-28 00:27:21.882241 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:27:21.882250 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:27:21.882260 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:27:21.882269 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:27:21.882279 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:27:21.882288 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:27:21.882298 | orchestrator | ok: [testbed-manager] 2026-02-28 00:27:21.882307 | orchestrator | 2026-02-28 00:27:21.882317 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 
2026-02-28 00:27:21.882326 | orchestrator | Saturday 28 February 2026 00:27:17 +0000 (0:00:00.239) 0:00:20.378 ***** 2026-02-28 00:27:21.882336 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:27:21.882346 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:27:21.882355 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:27:21.882365 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:27:21.882374 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:27:21.882383 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:27:21.882393 | orchestrator | ok: [testbed-manager] 2026-02-28 00:27:21.882403 | orchestrator | 2026-02-28 00:27:21.882412 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-28 00:27:21.882422 | orchestrator | Saturday 28 February 2026 00:27:18 +0000 (0:00:00.232) 0:00:20.610 ***** 2026-02-28 00:27:21.882433 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-28 00:27:21.882444 | orchestrator | 2026-02-28 00:27:21.882454 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-28 00:27:21.882463 | orchestrator | Saturday 28 February 2026 00:27:18 +0000 (0:00:00.309) 0:00:20.919 ***** 2026-02-28 00:27:21.882473 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:27:21.882483 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:27:21.882492 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:27:21.882502 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:27:21.882511 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:27:21.882520 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:27:21.882530 | orchestrator | ok: [testbed-manager] 2026-02-28 00:27:21.882539 | orchestrator | 2026-02-28 00:27:21.882559 | orchestrator | TASK [osism.commons.repository : 
Include tasks for Ubuntu < 24.04] ************* 2026-02-28 00:27:21.882569 | orchestrator | Saturday 28 February 2026 00:27:18 +0000 (0:00:00.525) 0:00:21.445 ***** 2026-02-28 00:27:21.882578 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:27:21.882588 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:27:21.882597 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:27:21.882607 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:27:21.882616 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:27:21.882626 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:27:21.882635 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:27:21.882645 | orchestrator | 2026-02-28 00:27:21.882654 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-28 00:27:21.882664 | orchestrator | Saturday 28 February 2026 00:27:19 +0000 (0:00:00.236) 0:00:21.682 ***** 2026-02-28 00:27:21.882674 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:27:21.882683 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:27:21.882693 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:27:21.882702 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:27:21.882712 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:27:21.882721 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:27:21.882731 | orchestrator | ok: [testbed-manager] 2026-02-28 00:27:21.882740 | orchestrator | 2026-02-28 00:27:21.882750 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-28 00:27:21.882760 | orchestrator | Saturday 28 February 2026 00:27:20 +0000 (0:00:01.016) 0:00:22.698 ***** 2026-02-28 00:27:21.882770 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:27:21.882779 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:27:21.882789 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:27:21.882798 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:27:21.882808 | 
orchestrator | ok: [testbed-node-1] 2026-02-28 00:27:21.882817 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:27:21.882827 | orchestrator | ok: [testbed-manager] 2026-02-28 00:27:21.882836 | orchestrator | 2026-02-28 00:27:21.882846 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-28 00:27:21.882856 | orchestrator | Saturday 28 February 2026 00:27:20 +0000 (0:00:00.548) 0:00:23.247 ***** 2026-02-28 00:27:21.882865 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:27:21.882875 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:27:21.882884 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:27:21.882916 | orchestrator | ok: [testbed-manager] 2026-02-28 00:27:21.882936 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:28:03.292650 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:28:03.292758 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:28:03.292773 | orchestrator | 2026-02-28 00:28:03.292785 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-28 00:28:03.292796 | orchestrator | Saturday 28 February 2026 00:27:21 +0000 (0:00:01.119) 0:00:24.366 ***** 2026-02-28 00:28:03.292806 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:28:03.292817 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:28:03.292827 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:28:03.292837 | orchestrator | changed: [testbed-manager] 2026-02-28 00:28:03.292847 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:28:03.292857 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:28:03.292866 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:28:03.292877 | orchestrator | 2026-02-28 00:28:03.292887 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2026-02-28 00:28:03.292897 | orchestrator | Saturday 28 February 2026 00:27:37 +0000 (0:00:15.960) 0:00:40.326 ***** 2026-02-28 00:28:03.292907 
| orchestrator | ok: [testbed-node-3] 2026-02-28 00:28:03.292976 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:28:03.292988 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:28:03.292998 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:28:03.293008 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:28:03.293017 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:28:03.293027 | orchestrator | ok: [testbed-manager] 2026-02-28 00:28:03.293063 | orchestrator | 2026-02-28 00:28:03.293074 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-02-28 00:28:03.293084 | orchestrator | Saturday 28 February 2026 00:27:38 +0000 (0:00:00.239) 0:00:40.566 ***** 2026-02-28 00:28:03.293094 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:28:03.293103 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:28:03.293113 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:28:03.293123 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:28:03.293133 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:28:03.293142 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:28:03.293152 | orchestrator | ok: [testbed-manager] 2026-02-28 00:28:03.293162 | orchestrator | 2026-02-28 00:28:03.293172 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-02-28 00:28:03.293182 | orchestrator | Saturday 28 February 2026 00:27:38 +0000 (0:00:00.240) 0:00:40.806 ***** 2026-02-28 00:28:03.293192 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:28:03.293204 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:28:03.293216 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:28:03.293228 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:28:03.293239 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:28:03.293251 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:28:03.293263 | orchestrator | ok: [testbed-manager] 2026-02-28 00:28:03.293274 | orchestrator | 2026-02-28 00:28:03.293285 | orchestrator 
| TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-02-28 00:28:03.293296 | orchestrator | Saturday 28 February 2026 00:27:38 +0000 (0:00:00.233) 0:00:41.040 *****
2026-02-28 00:28:03.293309 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-28 00:28:03.293323 | orchestrator |
2026-02-28 00:28:03.293335 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-02-28 00:28:03.293347 | orchestrator | Saturday 28 February 2026 00:27:38 +0000 (0:00:00.293) 0:00:41.334 *****
2026-02-28 00:28:03.293358 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:28:03.293369 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:28:03.293381 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:28:03.293392 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:28:03.293404 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:28:03.293415 | orchestrator | ok: [testbed-manager]
2026-02-28 00:28:03.293427 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:28:03.293439 | orchestrator |
2026-02-28 00:28:03.293467 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-02-28 00:28:03.293479 | orchestrator | Saturday 28 February 2026 00:27:40 +0000 (0:00:01.782) 0:00:43.116 *****
2026-02-28 00:28:03.293490 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:28:03.293502 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:28:03.293514 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:28:03.293526 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:28:03.293537 | orchestrator | changed: [testbed-manager]
2026-02-28 00:28:03.293549 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:28:03.293559 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:28:03.293569 | orchestrator |
2026-02-28 00:28:03.293579 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-02-28 00:28:03.293589 | orchestrator | Saturday 28 February 2026 00:27:41 +0000 (0:00:01.035) 0:00:44.152 *****
2026-02-28 00:28:03.293598 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:28:03.293608 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:28:03.293618 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:28:03.293628 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:28:03.293638 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:28:03.293647 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:28:03.293657 | orchestrator | ok: [testbed-manager]
2026-02-28 00:28:03.293667 | orchestrator |
2026-02-28 00:28:03.293677 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-02-28 00:28:03.293707 | orchestrator | Saturday 28 February 2026 00:27:42 +0000 (0:00:00.815) 0:00:44.968 *****
2026-02-28 00:28:03.293884 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-28 00:28:03.293900 | orchestrator |
2026-02-28 00:28:03.293910 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-02-28 00:28:03.293938 | orchestrator | Saturday 28 February 2026 00:27:42 +0000 (0:00:00.320) 0:00:45.288 *****
2026-02-28 00:28:03.293948 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:28:03.293958 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:28:03.293968 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:28:03.293977 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:28:03.293987 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:28:03.293997 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:28:03.294006 | orchestrator | changed: [testbed-manager]
2026-02-28 00:28:03.294071 | orchestrator |
2026-02-28 00:28:03.294100 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-02-28 00:28:03.294111 | orchestrator | Saturday 28 February 2026 00:27:43 +0000 (0:00:00.970) 0:00:46.259 *****
2026-02-28 00:28:03.294120 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:28:03.294130 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:28:03.294140 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:28:03.294150 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:28:03.294160 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:28:03.294169 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:28:03.294179 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:28:03.294188 | orchestrator |
2026-02-28 00:28:03.294198 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-02-28 00:28:03.294208 | orchestrator | Saturday 28 February 2026 00:27:44 +0000 (0:00:00.234) 0:00:46.493 *****
2026-02-28 00:28:03.294218 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-28 00:28:03.294227 | orchestrator |
2026-02-28 00:28:03.294237 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-02-28 00:28:03.294247 | orchestrator | Saturday 28 February 2026 00:27:44 +0000 (0:00:00.351) 0:00:46.844 *****
2026-02-28 00:28:03.294257 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:28:03.294266 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:28:03.294276 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:28:03.294285 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:28:03.294295 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:28:03.294305 | orchestrator | ok: [testbed-manager]
2026-02-28 00:28:03.294314 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:28:03.294324 | orchestrator |
2026-02-28 00:28:03.294333 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-02-28 00:28:03.294343 | orchestrator | Saturday 28 February 2026 00:27:46 +0000 (0:00:01.727) 0:00:48.572 *****
2026-02-28 00:28:03.294353 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:28:03.294363 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:28:03.294372 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:28:03.294382 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:28:03.294392 | orchestrator | changed: [testbed-manager]
2026-02-28 00:28:03.294401 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:28:03.294411 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:28:03.294420 | orchestrator |
2026-02-28 00:28:03.294430 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-02-28 00:28:03.294440 | orchestrator | Saturday 28 February 2026 00:27:47 +0000 (0:00:01.142) 0:00:49.715 *****
2026-02-28 00:28:03.294449 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:28:03.294459 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:28:03.294477 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:28:03.294487 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:28:03.294497 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:28:03.294506 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:28:03.294516 | orchestrator | changed: [testbed-manager]
2026-02-28 00:28:03.294526 | orchestrator |
2026-02-28 00:28:03.294535 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-02-28 00:28:03.294545 | orchestrator | Saturday 28 February 2026 00:28:00 +0000 (0:00:13.100) 0:01:02.815 *****
2026-02-28 00:28:03.294555 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:28:03.294564 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:28:03.294574 | orchestrator | ok: [testbed-manager]
2026-02-28 00:28:03.294584 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:28:03.294593 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:28:03.294603 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:28:03.294612 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:28:03.294622 | orchestrator |
2026-02-28 00:28:03.294632 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-02-28 00:28:03.294641 | orchestrator | Saturday 28 February 2026 00:28:01 +0000 (0:00:01.280) 0:01:04.095 *****
2026-02-28 00:28:03.294651 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:28:03.294661 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:28:03.294670 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:28:03.294680 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:28:03.294689 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:28:03.294699 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:28:03.294709 | orchestrator | ok: [testbed-manager]
2026-02-28 00:28:03.294718 | orchestrator |
2026-02-28 00:28:03.294728 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-02-28 00:28:03.294738 | orchestrator | Saturday 28 February 2026 00:28:02 +0000 (0:00:00.871) 0:01:04.966 *****
2026-02-28 00:28:03.294747 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:28:03.294757 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:28:03.294766 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:28:03.294776 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:28:03.294785 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:28:03.294795 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:28:03.294805 | orchestrator | ok: [testbed-manager]
2026-02-28 00:28:03.294814 | orchestrator |
2026-02-28 00:28:03.294824 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-02-28 00:28:03.294834 | orchestrator | Saturday 28 February 2026 00:28:02 +0000 (0:00:00.217) 0:01:05.184 *****
2026-02-28 00:28:03.294844 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:28:03.294858 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:28:03.294868 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:28:03.294878 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:28:03.294887 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:28:03.294897 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:28:03.294906 | orchestrator | ok: [testbed-manager]
2026-02-28 00:28:03.294916 | orchestrator |
2026-02-28 00:28:03.294941 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-02-28 00:28:03.294951 | orchestrator | Saturday 28 February 2026 00:28:02 +0000 (0:00:00.243) 0:01:05.428 *****
2026-02-28 00:28:03.294961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-28 00:28:03.294971 | orchestrator |
2026-02-28 00:28:03.294997 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-02-28 00:30:32.214210 | orchestrator | Saturday 28 February 2026 00:28:03 +0000 (0:00:00.305) 0:01:05.733 *****
2026-02-28 00:30:32.214324 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:30:32.214341 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:30:32.214353 | orchestrator | ok: [testbed-manager]
2026-02-28 00:30:32.214364 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:30:32.214402 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:30:32.214414 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:30:32.214425 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:30:32.214436 | orchestrator |
2026-02-28 00:30:32.214448 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-02-28 00:30:32.214459 | orchestrator | Saturday 28 February 2026 00:28:05 +0000 (0:00:01.946) 0:01:07.680 *****
2026-02-28 00:30:32.214470 | orchestrator | changed: [testbed-manager]
2026-02-28 00:30:32.214483 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:30:32.214494 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:30:32.214505 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:30:32.214516 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:30:32.214527 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:30:32.214538 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:30:32.214549 | orchestrator |
2026-02-28 00:30:32.214560 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-02-28 00:30:32.214572 | orchestrator | Saturday 28 February 2026 00:28:05 +0000 (0:00:00.619) 0:01:08.299 *****
2026-02-28 00:30:32.214583 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:30:32.214594 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:30:32.214605 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:30:32.214616 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:30:32.214627 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:30:32.214641 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:30:32.214653 | orchestrator | ok: [testbed-manager]
2026-02-28 00:30:32.214665 | orchestrator |
2026-02-28 00:30:32.214679 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-02-28 00:30:32.214692 | orchestrator | Saturday 28 February 2026 00:28:06 +0000 (0:00:00.264) 0:01:08.564 *****
2026-02-28 00:30:32.214704 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:30:32.214716 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:30:32.214729 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:30:32.214741 | orchestrator | ok: [testbed-manager]
2026-02-28 00:30:32.214754 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:30:32.214766 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:30:32.214778 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:30:32.214791 | orchestrator |
2026-02-28 00:30:32.214804 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-02-28 00:30:32.214817 | orchestrator | Saturday 28 February 2026 00:28:07 +0000 (0:00:01.177) 0:01:09.742 *****
2026-02-28 00:30:32.214828 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:30:32.214840 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:30:32.214851 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:30:32.214862 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:30:32.214873 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:30:32.214883 | orchestrator | changed: [testbed-manager]
2026-02-28 00:30:32.214895 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:30:32.214906 | orchestrator |
2026-02-28 00:30:32.214917 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-02-28 00:30:32.214928 | orchestrator | Saturday 28 February 2026 00:28:09 +0000 (0:00:01.964) 0:01:11.706 *****
2026-02-28 00:30:32.214939 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:30:32.214950 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:30:32.214961 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:30:32.214972 | orchestrator | ok: [testbed-manager]
2026-02-28 00:30:32.214983 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:30:32.214994 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:30:32.215005 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:30:32.215108 | orchestrator |
2026-02-28 00:30:32.215120 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-02-28 00:30:32.215131 | orchestrator | Saturday 28 February 2026 00:28:11 +0000 (0:00:02.628) 0:01:14.335 *****
2026-02-28 00:30:32.215142 | orchestrator | ok: [testbed-manager]
2026-02-28 00:30:32.215153 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:30:32.215164 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:30:32.215182 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:30:32.215192 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:30:32.215201 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:30:32.215211 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:30:32.215220 | orchestrator |
2026-02-28 00:30:32.215230 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-02-28 00:30:32.215240 | orchestrator | Saturday 28 February 2026 00:28:45 +0000 (0:00:33.622) 0:01:47.957 *****
2026-02-28 00:30:32.215250 | orchestrator | changed: [testbed-manager]
2026-02-28 00:30:32.215260 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:30:32.215269 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:30:32.215279 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:30:32.215288 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:30:32.215298 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:30:32.215308 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:30:32.215317 | orchestrator |
2026-02-28 00:30:32.215327 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-02-28 00:30:32.215337 | orchestrator | Saturday 28 February 2026 00:30:14 +0000 (0:01:28.566) 0:03:16.524 *****
2026-02-28 00:30:32.215346 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:30:32.215356 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:30:32.215366 | orchestrator | ok: [testbed-manager]
2026-02-28 00:30:32.215375 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:30:32.215398 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:30:32.215408 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:30:32.215418 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:30:32.215428 | orchestrator |
2026-02-28 00:30:32.215438 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-02-28 00:30:32.215448 | orchestrator | Saturday 28 February 2026 00:30:15 +0000 (0:00:01.762) 0:03:18.286 *****
2026-02-28 00:30:32.215457 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:30:32.215467 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:30:32.215476 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:30:32.215486 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:30:32.215495 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:30:32.215505 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:30:32.215514 | orchestrator | changed: [testbed-manager]
2026-02-28 00:30:32.215524 | orchestrator |
2026-02-28 00:30:32.215534 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-02-28 00:30:32.215544 | orchestrator | Saturday 28 February 2026 00:30:29 +0000 (0:00:13.234) 0:03:31.521 *****
2026-02-28 00:30:32.215581 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-02-28 00:30:32.215602 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-02-28 00:30:32.215616 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-02-28 00:30:32.215638 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-28 00:30:32.215649 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-28 00:30:32.215659 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-02-28 00:30:32.215669 | orchestrator |
2026-02-28 00:30:32.215679 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-02-28 00:30:32.215689 | orchestrator | Saturday 28 February 2026 00:30:29 +0000 (0:00:00.392) 0:03:31.913 *****
2026-02-28 00:30:32.215700 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-28 00:30:32.215709 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-28 00:30:32.215719 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:30:32.215729 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:30:32.215738 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-28 00:30:32.215748 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:30:32.215757 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-28 00:30:32.215767 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:30:32.215777 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-28 00:30:32.215786 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-28 00:30:32.215796 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-28 00:30:32.215806 | orchestrator |
2026-02-28 00:30:32.215822 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-02-28 00:30:32.215832 | orchestrator | Saturday 28 February 2026 00:30:32 +0000 (0:00:02.680) 0:03:34.593 *****
2026-02-28 00:30:32.215842 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-28 00:30:32.215852 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-28 00:30:32.215863 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-28 00:30:32.215872 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-28 00:30:32.215882 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-28 00:30:32.215898 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-28 00:30:38.823704 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-28 00:30:38.823841 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-28 00:30:38.823869 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-28 00:30:38.823887 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-28 00:30:38.823906 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-28 00:30:38.823954 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-28 00:30:38.823975 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-28 00:30:38.823995 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-28 00:30:38.824046 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-28 00:30:38.824066 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-28 00:30:38.824085 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:30:38.824104 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-28 00:30:38.824124 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-28 00:30:38.824143 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-28 00:30:38.824160 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-28 00:30:38.824179 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-28 00:30:38.824198 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-28 00:30:38.824217 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-28 00:30:38.824238 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-28 00:30:38.824257 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-28 00:30:38.824275 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:30:38.824294 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-28 00:30:38.824314 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-28 00:30:38.824337 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-28 00:30:38.824359 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-28 00:30:38.824379 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-28 00:30:38.824402 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:30:38.824423 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-28 00:30:38.824445 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-28 00:30:38.824465 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-28 00:30:38.824484 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-28 00:30:38.824504 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-28 00:30:38.824523 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-28 00:30:38.824543 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-28 00:30:38.824562 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-28 00:30:38.824600 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-28 00:30:38.824621 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-28 00:30:38.824639 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:30:38.824658 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-28 00:30:38.824692 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-28 00:30:38.824711 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-28 00:30:38.824729 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-28 00:30:38.824747 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-28 00:30:38.824795 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-28 00:30:38.824816 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-28 00:30:38.824833 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-28 00:30:38.824851 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-28 00:30:38.824867 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-28 00:30:38.824886 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-28 00:30:38.824902 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-28 00:30:38.824920 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-28 00:30:38.824939 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-28 00:30:38.824958 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-28 00:30:38.824977 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-28 00:30:38.824995 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-28 00:30:38.825066 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-28 00:30:38.825087 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-28 00:30:38.825104 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-28 00:30:38.825121 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-28 00:30:38.825138 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-28 00:30:38.825155 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-28 00:30:38.825174 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-28 00:30:38.825192 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-28 00:30:38.825212 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-28 00:30:38.825231 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-28 00:30:38.825250 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-28 00:30:38.825268 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-28 00:30:38.825287 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-28 00:30:38.825307 | orchestrator |
2026-02-28 00:30:38.825327 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-02-28 00:30:38.825346 | orchestrator | Saturday 28 February 2026 00:30:36 +0000 (0:00:04.797) 0:03:39.391 *****
2026-02-28 00:30:38.825366 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-28 00:30:38.825400 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-28 00:30:38.825421 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-28 00:30:38.825439 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-28 00:30:38.825459 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-28 00:30:38.825477 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-28 00:30:38.825497 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-28 00:30:38.825509 | orchestrator |
2026-02-28 00:30:38.825520 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-02-28 00:30:38.825531 | orchestrator | Saturday 28 February 2026 00:30:38 +0000 (0:00:01.441) 0:03:40.832 *****
2026-02-28 00:30:38.825552 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:38.825563 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:38.825574 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:30:38.825585 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:38.825596 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:30:38.825607 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:30:38.825618 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:38.825629 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:30:38.825640 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:38.825651 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:38.825682 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:52.592865 | orchestrator |
2026-02-28 00:30:52.592982 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-02-28 00:30:52.593000 | orchestrator | Saturday 28 February 2026 00:30:38 +0000 (0:00:00.464) 0:03:41.297 *****
2026-02-28 00:30:52.593014 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:52.593075 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:30:52.593089 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:52.593103 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:30:52.593114 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:52.593128 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:30:52.593141 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:52.593155 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:30:52.593168 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:52.593181 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:52.593193 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:52.593205 | orchestrator |
2026-02-28 00:30:52.593216 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-02-28 00:30:52.593228 | orchestrator | Saturday 28 February 2026 00:30:39 +0000 (0:00:00.570) 0:03:41.868 *****
2026-02-28 00:30:52.593241 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-28 00:30:52.593255 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-28 00:30:52.593293 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:30:52.593307 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-28 00:30:52.593320 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:30:52.593334 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:30:52.593347 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-28 00:30:52.593361 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:30:52.593375 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-28 00:30:52.593389 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-28 00:30:52.593404 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-28 00:30:52.593419 | orchestrator |
2026-02-28 00:30:52.593434 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-02-28 00:30:52.593449 | orchestrator | Saturday 28 February 2026 00:30:40 +0000 (0:00:01.534) 0:03:43.402 *****
2026-02-28 00:30:52.593464 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:30:52.593480 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:30:52.593493 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:30:52.593504 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:30:52.593516 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:30:52.593529 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:30:52.593544 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:30:52.593559 | orchestrator |
2026-02-28 00:30:52.593573 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-02-28 00:30:52.593588 | orchestrator | Saturday 28 February 2026 00:30:41 +0000 (0:00:00.305) 0:03:43.708 *****
2026-02-28 00:30:52.593603 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:30:52.593617 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:30:52.593629 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:30:52.593641 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:30:52.593653 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:30:52.593667 | orchestrator | ok: [testbed-manager]
2026-02-28 00:30:52.593681 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:30:52.593696 | orchestrator |
2026-02-28 00:30:52.593710 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-02-28 00:30:52.593726 | orchestrator | Saturday 28 February 2026 00:30:47 +0000 (0:00:05.864) 0:03:49.573 *****
2026-02-28 00:30:52.593741 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-02-28 00:30:52.593756 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-02-28 00:30:52.593770 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:30:52.593785 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-02-28 00:30:52.593799 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:30:52.593813 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:30:52.593827 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-02-28 00:30:52.593840 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:30:52.593854 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-02-28 00:30:52.593868 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:30:52.593881 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-02-28 00:30:52.593895 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:30:52.593909 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-02-28 00:30:52.593922 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:30:52.593936 | orchestrator |
2026-02-28 00:30:52.593950 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-02-28 00:30:52.593964 | orchestrator | Saturday 28 February 2026 00:30:47 +0000 (0:00:00.315) 0:03:49.888 ***** 2026-02-28 00:30:52.593977 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-02-28 00:30:52.593990 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-02-28 00:30:52.594011 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-02-28 00:30:52.594119 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-02-28 00:30:52.594145 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-02-28 00:30:52.594161 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-02-28 00:30:52.594176 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-02-28 00:30:52.594190 | orchestrator | 2026-02-28 00:30:52.594205 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-02-28 00:30:52.594219 | orchestrator | Saturday 28 February 2026 00:30:48 +0000 (0:00:01.145) 0:03:51.034 ***** 2026-02-28 00:30:52.594235 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-28 00:30:52.594251 | orchestrator | 2026-02-28 00:30:52.594265 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-02-28 00:30:52.594279 | orchestrator | Saturday 28 February 2026 00:30:49 +0000 (0:00:00.429) 0:03:51.463 ***** 2026-02-28 00:30:52.594293 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:30:52.594308 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:30:52.594322 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:30:52.594336 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:30:52.594351 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:30:52.594365 | orchestrator | ok: [testbed-manager] 2026-02-28 00:30:52.594378 | orchestrator | ok: [testbed-node-2] 2026-02-28 
00:30:52.594390 | orchestrator | 2026-02-28 00:30:52.594402 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-02-28 00:30:52.594415 | orchestrator | Saturday 28 February 2026 00:30:50 +0000 (0:00:01.211) 0:03:52.674 ***** 2026-02-28 00:30:52.594429 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:30:52.594443 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:30:52.594457 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:30:52.594471 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:30:52.594485 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:30:52.594499 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:30:52.594513 | orchestrator | ok: [testbed-manager] 2026-02-28 00:30:52.594528 | orchestrator | 2026-02-28 00:30:52.594542 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-02-28 00:30:52.594556 | orchestrator | Saturday 28 February 2026 00:30:50 +0000 (0:00:00.599) 0:03:53.273 ***** 2026-02-28 00:30:52.594570 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:30:52.594585 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:30:52.594599 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:30:52.594612 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:30:52.594627 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:30:52.594642 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:30:52.594656 | orchestrator | changed: [testbed-manager] 2026-02-28 00:30:52.594669 | orchestrator | 2026-02-28 00:30:52.594703 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-02-28 00:30:52.594718 | orchestrator | Saturday 28 February 2026 00:30:51 +0000 (0:00:00.629) 0:03:53.902 ***** 2026-02-28 00:30:52.594732 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:30:52.594745 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:30:52.594756 | orchestrator | ok: [testbed-node-5] 2026-02-28 
00:30:52.594768 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:30:52.594780 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:30:52.594793 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:30:52.594806 | orchestrator | ok: [testbed-manager] 2026-02-28 00:30:52.594820 | orchestrator | 2026-02-28 00:30:52.594834 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-02-28 00:30:52.594848 | orchestrator | Saturday 28 February 2026 00:30:52 +0000 (0:00:00.585) 0:03:54.488 ***** 2026-02-28 00:30:52.594867 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772237107.575857, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:52.594902 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772237101.3551464, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:52.594917 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 
567, 'dev': 2049, 'nlink': 1, 'atime': 1772237110.4266863, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:52.594943 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772237107.015875, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:57.923278 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772237121.9366243, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:57.923417 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772237108.0452652, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:57.923441 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772237087.0479877, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:57.923468 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:57.923514 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:57.923546 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:57.923561 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:57.923608 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:57.923624 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 
'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:57.923638 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:57.923653 | orchestrator | 2026-02-28 00:30:57.923669 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-02-28 00:30:57.923693 | orchestrator | Saturday 28 February 2026 00:30:53 +0000 (0:00:00.986) 0:03:55.474 ***** 2026-02-28 00:30:57.923708 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:30:57.923724 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:30:57.923737 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:30:57.923750 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:30:57.923765 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:30:57.923778 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:30:57.923792 | orchestrator | changed: [testbed-manager] 2026-02-28 00:30:57.923806 | orchestrator | 2026-02-28 00:30:57.923820 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-02-28 00:30:57.923834 | orchestrator | Saturday 28 February 2026 00:30:54 +0000 (0:00:01.073) 0:03:56.547 ***** 2026-02-28 00:30:57.923849 | orchestrator | changed: [testbed-node-3] 2026-02-28 
00:30:57.923863 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:30:57.923877 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:30:57.923890 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:30:57.923903 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:30:57.923918 | orchestrator | changed: [testbed-manager] 2026-02-28 00:30:57.923932 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:30:57.923946 | orchestrator | 2026-02-28 00:30:57.923960 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-02-28 00:30:57.923975 | orchestrator | Saturday 28 February 2026 00:30:55 +0000 (0:00:01.154) 0:03:57.702 ***** 2026-02-28 00:30:57.923988 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:30:57.924001 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:30:57.924015 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:30:57.924080 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:30:57.924094 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:30:57.924108 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:30:57.924122 | orchestrator | changed: [testbed-manager] 2026-02-28 00:30:57.924136 | orchestrator | 2026-02-28 00:30:57.924160 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-02-28 00:30:57.924176 | orchestrator | Saturday 28 February 2026 00:30:56 +0000 (0:00:01.151) 0:03:58.853 ***** 2026-02-28 00:30:57.924190 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:30:57.924203 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:30:57.924217 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:30:57.924229 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:30:57.924240 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:30:57.924253 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:30:57.924267 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:30:57.924280 | 
orchestrator | 2026-02-28 00:30:57.924293 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-02-28 00:30:57.924306 | orchestrator | Saturday 28 February 2026 00:30:56 +0000 (0:00:00.302) 0:03:59.157 ***** 2026-02-28 00:30:57.924319 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:30:57.924333 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:30:57.924346 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:30:57.924358 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:30:57.924370 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:30:57.924383 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:30:57.924395 | orchestrator | ok: [testbed-manager] 2026-02-28 00:30:57.924408 | orchestrator | 2026-02-28 00:30:57.924420 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-02-28 00:30:57.924431 | orchestrator | Saturday 28 February 2026 00:30:57 +0000 (0:00:00.760) 0:03:59.917 ***** 2026-02-28 00:30:57.924444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-28 00:30:57.924457 | orchestrator | 2026-02-28 00:30:57.924468 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-02-28 00:30:57.924492 | orchestrator | Saturday 28 February 2026 00:30:57 +0000 (0:00:00.448) 0:04:00.365 ***** 2026-02-28 00:32:15.752944 | orchestrator | ok: [testbed-manager] 2026-02-28 00:32:15.753121 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:32:15.753152 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:32:15.753173 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:32:15.753192 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:32:15.753213 | orchestrator | changed: [testbed-node-3] 2026-02-28 
00:32:15.753233 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:32:15.753253 | orchestrator | 2026-02-28 00:32:15.753273 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-02-28 00:32:15.753294 | orchestrator | Saturday 28 February 2026 00:31:06 +0000 (0:00:08.337) 0:04:08.703 ***** 2026-02-28 00:32:15.753314 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:32:15.753333 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:32:15.753352 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:32:15.753371 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:32:15.753389 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:32:15.753408 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:32:15.753428 | orchestrator | ok: [testbed-manager] 2026-02-28 00:32:15.753447 | orchestrator | 2026-02-28 00:32:15.753466 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-02-28 00:32:15.753486 | orchestrator | Saturday 28 February 2026 00:31:07 +0000 (0:00:01.380) 0:04:10.084 ***** 2026-02-28 00:32:15.753499 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:32:15.753512 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:32:15.753525 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:32:15.753537 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:32:15.753550 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:32:15.753561 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:32:15.753572 | orchestrator | ok: [testbed-manager] 2026-02-28 00:32:15.753583 | orchestrator | 2026-02-28 00:32:15.753595 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-02-28 00:32:15.753606 | orchestrator | Saturday 28 February 2026 00:31:08 +0000 (0:00:00.981) 0:04:11.065 ***** 2026-02-28 00:32:15.753617 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:32:15.753628 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:32:15.753639 | 
orchestrator | ok: [testbed-node-5] 2026-02-28 00:32:15.753650 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:32:15.753661 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:32:15.753672 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:32:15.753683 | orchestrator | ok: [testbed-manager] 2026-02-28 00:32:15.753694 | orchestrator | 2026-02-28 00:32:15.753705 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-02-28 00:32:15.753717 | orchestrator | Saturday 28 February 2026 00:31:08 +0000 (0:00:00.317) 0:04:11.382 ***** 2026-02-28 00:32:15.753728 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:32:15.753739 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:32:15.753750 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:32:15.753761 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:32:15.753771 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:32:15.753782 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:32:15.753793 | orchestrator | ok: [testbed-manager] 2026-02-28 00:32:15.753804 | orchestrator | 2026-02-28 00:32:15.753815 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-02-28 00:32:15.753827 | orchestrator | Saturday 28 February 2026 00:31:09 +0000 (0:00:00.315) 0:04:11.698 ***** 2026-02-28 00:32:15.753838 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:32:15.753849 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:32:15.753860 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:32:15.753871 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:32:15.753881 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:32:15.753892 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:32:15.753903 | orchestrator | ok: [testbed-manager] 2026-02-28 00:32:15.753914 | orchestrator | 2026-02-28 00:32:15.753926 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-02-28 
00:32:15.753966 | orchestrator | Saturday 28 February 2026 00:31:09 +0000 (0:00:00.338) 0:04:12.037 ***** 2026-02-28 00:32:15.753978 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:32:15.753989 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:32:15.754000 | orchestrator | ok: [testbed-manager] 2026-02-28 00:32:15.754011 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:32:15.754151 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:32:15.754164 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:32:15.754174 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:32:15.754185 | orchestrator | 2026-02-28 00:32:15.754196 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-02-28 00:32:15.754208 | orchestrator | Saturday 28 February 2026 00:31:15 +0000 (0:00:05.648) 0:04:17.685 ***** 2026-02-28 00:32:15.754221 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-28 00:32:15.754235 | orchestrator | 2026-02-28 00:32:15.754246 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-02-28 00:32:15.754257 | orchestrator | Saturday 28 February 2026 00:31:15 +0000 (0:00:00.419) 0:04:18.105 ***** 2026-02-28 00:32:15.754268 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-02-28 00:32:15.754279 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-02-28 00:32:15.754290 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-02-28 00:32:15.754301 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-02-28 00:32:15.754312 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:32:15.754323 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:32:15.754334 | orchestrator | skipping: 
[testbed-node-5] => (item=apt-daily-upgrade)  2026-02-28 00:32:15.754344 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-02-28 00:32:15.754355 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-02-28 00:32:15.754366 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-02-28 00:32:15.754377 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:32:15.754387 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-02-28 00:32:15.754398 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:32:15.754409 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-02-28 00:32:15.754420 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-02-28 00:32:15.754431 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-02-28 00:32:15.754462 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:32:15.754475 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:32:15.754486 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-02-28 00:32:15.754497 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-02-28 00:32:15.754507 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:32:15.754518 | orchestrator | 2026-02-28 00:32:15.754529 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-02-28 00:32:15.754540 | orchestrator | Saturday 28 February 2026 00:31:16 +0000 (0:00:00.352) 0:04:18.457 ***** 2026-02-28 00:32:15.754551 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-28 00:32:15.754563 | orchestrator | 2026-02-28 00:32:15.754573 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] 
********************************
2026-02-28 00:32:15.754584 | orchestrator | Saturday 28 February 2026 00:31:16 +0000 (0:00:00.403) 0:04:18.861 *****
2026-02-28 00:32:15.754595 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-02-28 00:32:15.754606 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-02-28 00:32:15.754617 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:32:15.754638 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-02-28 00:32:15.754649 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:32:15.754660 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-02-28 00:32:15.754671 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:32:15.754682 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-02-28 00:32:15.754693 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:32:15.754704 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-02-28 00:32:15.754714 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:32:15.754725 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:32:15.754736 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-02-28 00:32:15.754747 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:32:15.754758 | orchestrator |
2026-02-28 00:32:15.754769 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-02-28 00:32:15.754796 | orchestrator | Saturday 28 February 2026 00:31:16 +0000 (0:00:00.367) 0:04:19.228 *****
2026-02-28 00:32:15.754808 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-28 00:32:15.754820 | orchestrator |
2026-02-28 00:32:15.754830 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-02-28 00:32:15.754841 | orchestrator | Saturday 28 February 2026 00:31:17 +0000 (0:00:00.450) 0:04:19.679 *****
2026-02-28 00:32:15.754852 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:32:15.754863 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:32:15.754874 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:32:15.754885 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:32:15.754895 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:32:15.754906 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:32:15.754917 | orchestrator | changed: [testbed-manager]
2026-02-28 00:32:15.754928 | orchestrator |
2026-02-28 00:32:15.754939 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-02-28 00:32:15.754950 | orchestrator | Saturday 28 February 2026 00:31:51 +0000 (0:00:34.510) 0:04:54.190 *****
2026-02-28 00:32:15.754961 | orchestrator | changed: [testbed-manager]
2026-02-28 00:32:15.754971 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:32:15.754982 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:32:15.754993 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:32:15.755003 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:32:15.755019 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:32:15.755030 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:32:15.755041 | orchestrator |
2026-02-28 00:32:15.755052 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-02-28 00:32:15.755084 | orchestrator | Saturday 28 February 2026 00:32:00 +0000 (0:00:08.348) 0:05:02.538 *****
2026-02-28 00:32:15.755095 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:32:15.755106 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:32:15.755117 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:32:15.755127 | orchestrator | changed: [testbed-manager]
2026-02-28 00:32:15.755138 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:32:15.755149 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:32:15.755160 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:32:15.755171 | orchestrator |
2026-02-28 00:32:15.755182 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-02-28 00:32:15.755193 | orchestrator | Saturday 28 February 2026 00:32:07 +0000 (0:00:07.866) 0:05:10.405 *****
2026-02-28 00:32:15.755203 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:32:15.755214 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:32:15.755225 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:32:15.755236 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:32:15.755256 | orchestrator | ok: [testbed-manager]
2026-02-28 00:32:15.755267 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:32:15.755277 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:32:15.755288 | orchestrator |
2026-02-28 00:32:15.755299 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-02-28 00:32:15.755310 | orchestrator | Saturday 28 February 2026 00:32:09 +0000 (0:00:01.682) 0:05:12.087 *****
2026-02-28 00:32:15.755321 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:32:15.755332 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:32:15.755343 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:32:15.755354 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:32:15.755365 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:32:15.755375 | orchestrator | changed: [testbed-manager]
2026-02-28 00:32:15.755386 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:32:15.755397 | orchestrator |
2026-02-28 00:32:15.755415 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-02-28 00:32:27.058954 | orchestrator | Saturday 28 February 2026 00:32:15 +0000 (0:00:06.107) 0:05:18.194 *****
2026-02-28 00:32:27.059096 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-28 00:32:27.059116 | orchestrator |
2026-02-28 00:32:27.059129 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-02-28 00:32:27.059141 | orchestrator | Saturday 28 February 2026 00:32:16 +0000 (0:00:00.472) 0:05:18.667 *****
2026-02-28 00:32:27.059153 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:32:27.059164 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:32:27.059175 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:32:27.059186 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:32:27.059197 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:32:27.059208 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:32:27.059219 | orchestrator | changed: [testbed-manager]
2026-02-28 00:32:27.059229 | orchestrator |
2026-02-28 00:32:27.059240 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-02-28 00:32:27.059251 | orchestrator | Saturday 28 February 2026 00:32:16 +0000 (0:00:00.751) 0:05:19.418 *****
2026-02-28 00:32:27.059262 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:32:27.059274 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:32:27.059285 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:32:27.059295 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:32:27.059306 | orchestrator | ok: [testbed-manager]
2026-02-28 00:32:27.059317 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:32:27.059328 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:32:27.059339 | orchestrator |
2026-02-28 00:32:27.059350 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-02-28 00:32:27.059361 | orchestrator | Saturday 28 February 2026 00:32:18 +0000 (0:00:01.736) 0:05:21.155 *****
2026-02-28 00:32:27.059372 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:32:27.059383 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:32:27.059393 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:32:27.059404 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:32:27.059415 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:32:27.059426 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:32:27.059436 | orchestrator | changed: [testbed-manager]
2026-02-28 00:32:27.059447 | orchestrator |
2026-02-28 00:32:27.059458 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-02-28 00:32:27.059469 | orchestrator | Saturday 28 February 2026 00:32:19 +0000 (0:00:00.794) 0:05:21.949 *****
2026-02-28 00:32:27.059480 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:32:27.059492 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:32:27.059505 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:32:27.059517 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:32:27.059529 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:32:27.059566 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:32:27.059580 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:32:27.059592 | orchestrator |
2026-02-28 00:32:27.059605 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-02-28 00:32:27.059618 | orchestrator | Saturday 28 February 2026 00:32:19 +0000 (0:00:00.280) 0:05:22.230 *****
2026-02-28 00:32:27.059630 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:32:27.059642 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:32:27.059654 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:32:27.059667 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:32:27.059679 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:32:27.059692 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:32:27.059705 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:32:27.059717 | orchestrator |
2026-02-28 00:32:27.059728 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-02-28 00:32:27.059739 | orchestrator | Saturday 28 February 2026 00:32:20 +0000 (0:00:00.380) 0:05:22.610 *****
2026-02-28 00:32:27.059750 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:32:27.059761 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:32:27.059772 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:32:27.059782 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:32:27.059807 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:32:27.059818 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:32:27.059829 | orchestrator | ok: [testbed-manager]
2026-02-28 00:32:27.059840 | orchestrator |
2026-02-28 00:32:27.059851 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-02-28 00:32:27.059862 | orchestrator | Saturday 28 February 2026 00:32:20 +0000 (0:00:00.328) 0:05:22.939 *****
2026-02-28 00:32:27.059873 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:32:27.059883 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:32:27.059894 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:32:27.059905 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:32:27.059915 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:32:27.059926 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:32:27.059937 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:32:27.059947 | orchestrator |
2026-02-28 00:32:27.059958 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-02-28 00:32:27.059970 | orchestrator | Saturday 28 February 2026 00:32:20 +0000 (0:00:00.272) 0:05:23.212 *****
2026-02-28 00:32:27.059981 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:32:27.059992 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:32:27.060002 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:32:27.060013 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:32:27.060024 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:32:27.060035 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:32:27.060045 | orchestrator | ok: [testbed-manager]
2026-02-28 00:32:27.060056 | orchestrator |
2026-02-28 00:32:27.060082 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-02-28 00:32:27.060094 | orchestrator | Saturday 28 February 2026 00:32:21 +0000 (0:00:00.327) 0:05:23.540 *****
2026-02-28 00:32:27.060105 | orchestrator | ok: [testbed-node-3] =>
2026-02-28 00:32:27.060116 | orchestrator |   docker_version: 5:27.5.1
2026-02-28 00:32:27.060127 | orchestrator | ok: [testbed-node-4] =>
2026-02-28 00:32:27.060137 | orchestrator |   docker_version: 5:27.5.1
2026-02-28 00:32:27.060148 | orchestrator | ok: [testbed-node-5] =>
2026-02-28 00:32:27.060159 | orchestrator |   docker_version: 5:27.5.1
2026-02-28 00:32:27.060170 | orchestrator | ok: [testbed-node-0] =>
2026-02-28 00:32:27.060181 | orchestrator |   docker_version: 5:27.5.1
2026-02-28 00:32:27.060210 | orchestrator | ok: [testbed-node-1] =>
2026-02-28 00:32:27.060222 | orchestrator |   docker_version: 5:27.5.1
2026-02-28 00:32:27.060233 | orchestrator | ok: [testbed-node-2] =>
2026-02-28 00:32:27.060244 | orchestrator |   docker_version: 5:27.5.1
2026-02-28 00:32:27.060255 | orchestrator | ok: [testbed-manager] =>
2026-02-28 00:32:27.060266 | orchestrator |   docker_version: 5:27.5.1
2026-02-28 00:32:27.060284 | orchestrator |
2026-02-28 00:32:27.060295 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-02-28 00:32:27.060306 | orchestrator | Saturday 28 February 2026 00:32:21 +0000 (0:00:00.290) 0:05:23.830 *****
2026-02-28 00:32:27.060317 | orchestrator | ok: [testbed-node-3] =>
2026-02-28 00:32:27.060328 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-28 00:32:27.060339 | orchestrator | ok: [testbed-node-4] =>
2026-02-28 00:32:27.060350 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-28 00:32:27.060361 | orchestrator | ok: [testbed-node-5] =>
2026-02-28 00:32:27.060371 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-28 00:32:27.060382 | orchestrator | ok: [testbed-node-0] =>
2026-02-28 00:32:27.060393 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-28 00:32:27.060404 | orchestrator | ok: [testbed-node-1] =>
2026-02-28 00:32:27.060414 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-28 00:32:27.060425 | orchestrator | ok: [testbed-node-2] =>
2026-02-28 00:32:27.060436 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-28 00:32:27.060447 | orchestrator | ok: [testbed-manager] =>
2026-02-28 00:32:27.060459 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-28 00:32:27.060479 | orchestrator |
2026-02-28 00:32:27.060498 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-02-28 00:32:27.060515 | orchestrator | Saturday 28 February 2026 00:32:21 +0000 (0:00:00.312) 0:05:24.143 *****
2026-02-28 00:32:27.060534 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:32:27.060551 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:32:27.060568 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:32:27.060585 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:32:27.060603 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:32:27.060620 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:32:27.060637 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:32:27.060653 | orchestrator |
2026-02-28 00:32:27.060670 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-02-28 00:32:27.060687 | orchestrator | Saturday 28 February 2026 00:32:21 +0000 (0:00:00.284) 0:05:24.428 *****
2026-02-28 00:32:27.060704 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:32:27.060721 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:32:27.060739 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:32:27.060758 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:32:27.060776 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:32:27.060794 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:32:27.060813 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:32:27.060834 | orchestrator |
2026-02-28 00:32:27.060852 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-02-28 00:32:27.060869 | orchestrator | Saturday 28 February 2026 00:32:22 +0000 (0:00:00.286) 0:05:24.715 *****
2026-02-28 00:32:27.060890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-28 00:32:27.060910 | orchestrator |
2026-02-28 00:32:27.060929 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-02-28 00:32:27.060949 | orchestrator | Saturday 28 February 2026 00:32:22 +0000 (0:00:00.577) 0:05:25.293 *****
2026-02-28 00:32:27.060967 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:32:27.060987 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:32:27.061007 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:32:27.061026 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:32:27.061045 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:32:27.061117 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:32:27.061140 | orchestrator | ok: [testbed-manager]
2026-02-28 00:32:27.061159 | orchestrator |
2026-02-28 00:32:27.061179 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-02-28 00:32:27.061209 | orchestrator | Saturday 28 February 2026 00:32:23 +0000 (0:00:00.810) 0:05:26.103 *****
2026-02-28 00:32:27.061241 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:32:27.061259 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:32:27.061279 | orchestrator | ok: [testbed-manager]
2026-02-28 00:32:27.061297 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:32:27.061316 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:32:27.061335 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:32:27.061354 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:32:27.061373 | orchestrator |
2026-02-28 00:32:27.061392 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-02-28 00:32:27.061412 | orchestrator | Saturday 28 February 2026 00:32:26 +0000 (0:00:03.002) 0:05:29.106 *****
2026-02-28 00:32:27.061432 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-02-28 00:32:27.061451 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-02-28 00:32:27.061470 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-02-28 00:32:27.061488 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-02-28 00:32:27.061507 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-02-28 00:32:27.061518 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:32:27.061529 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-02-28 00:32:27.061540 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-02-28 00:32:27.061551 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-02-28 00:32:27.061562 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-02-28 00:32:27.061573 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:32:27.061583 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-02-28 00:32:27.061594 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-02-28 00:32:27.061604 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-02-28 00:32:27.061615 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:32:27.061626 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-02-28 00:32:27.061650 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-02-28 00:33:27.332808 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-02-28 00:33:27.332909 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:33:27.332923 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-02-28 00:33:27.332933 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-02-28 00:33:27.332941 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-02-28 00:33:27.332949 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:33:27.332957 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:33:27.332965 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-02-28 00:33:27.332974 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-02-28 00:33:27.332982 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-02-28 00:33:27.332990 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:33:27.332998 | orchestrator |
2026-02-28 00:33:27.333008 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-02-28 00:33:27.333017 | orchestrator | Saturday 28 February 2026 00:32:27 +0000 (0:00:00.657) 0:05:29.764 *****
2026-02-28 00:33:27.333025 | orchestrator | ok: [testbed-manager]
2026-02-28 00:33:27.333033 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:33:27.333041 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:33:27.333049 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:33:27.333057 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:33:27.333065 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:33:27.333073 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:33:27.333081 | orchestrator |
2026-02-28 00:33:27.333116 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-02-28 00:33:27.333125 | orchestrator | Saturday 28 February 2026 00:32:33 +0000 (0:00:06.655) 0:05:36.419 *****
2026-02-28 00:33:27.333133 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:33:27.333165 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:33:27.333173 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:33:27.333181 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:33:27.333189 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:33:27.333196 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:33:27.333204 | orchestrator | ok: [testbed-manager]
2026-02-28 00:33:27.333212 | orchestrator |
2026-02-28 00:33:27.333220 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-02-28 00:33:27.333228 | orchestrator | Saturday 28 February 2026 00:32:35 +0000 (0:00:01.089) 0:05:37.508 *****
2026-02-28 00:33:27.333236 | orchestrator | ok: [testbed-manager]
2026-02-28 00:33:27.333243 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:33:27.333251 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:33:27.333259 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:33:27.333267 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:33:27.333275 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:33:27.333282 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:33:27.333291 | orchestrator |
2026-02-28 00:33:27.333299 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-02-28 00:33:27.333307 | orchestrator | Saturday 28 February 2026 00:32:43 +0000 (0:00:08.150) 0:05:45.658 *****
2026-02-28 00:33:27.333315 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:33:27.333323 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:33:27.333330 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:33:27.333338 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:33:27.333346 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:33:27.333355 | orchestrator | changed: [testbed-manager]
2026-02-28 00:33:27.333365 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:33:27.333374 | orchestrator |
2026-02-28 00:33:27.333382 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-02-28 00:33:27.333392 | orchestrator | Saturday 28 February 2026 00:32:46 +0000 (0:00:03.316) 0:05:48.975 *****
2026-02-28 00:33:27.333400 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:33:27.333409 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:33:27.333418 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:33:27.333427 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:33:27.333436 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:33:27.333445 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:33:27.333453 | orchestrator | ok: [testbed-manager]
2026-02-28 00:33:27.333462 | orchestrator |
2026-02-28 00:33:27.333484 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-02-28 00:33:27.333493 | orchestrator | Saturday 28 February 2026 00:32:47 +0000 (0:00:01.292) 0:05:50.267 *****
2026-02-28 00:33:27.333502 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:33:27.333511 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:33:27.333521 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:33:27.333529 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:33:27.333539 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:33:27.333547 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:33:27.333556 | orchestrator | ok: [testbed-manager]
2026-02-28 00:33:27.333565 | orchestrator |
2026-02-28 00:33:27.333574 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-02-28 00:33:27.333584 | orchestrator | Saturday 28 February 2026 00:32:49 +0000 (0:00:01.491) 0:05:51.758 *****
2026-02-28 00:33:27.333593 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:33:27.333602 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:33:27.333611 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:33:27.333619 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:33:27.333627 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:33:27.333635 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:33:27.333643 | orchestrator | changed: [testbed-manager]
2026-02-28 00:33:27.333651 | orchestrator |
2026-02-28 00:33:27.333659 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-02-28 00:33:27.333672 | orchestrator | Saturday 28 February 2026 00:32:50 +0000 (0:00:01.022) 0:05:52.781 *****
2026-02-28 00:33:27.333680 | orchestrator | ok: [testbed-manager]
2026-02-28 00:33:27.333688 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:33:27.333696 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:33:27.333704 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:33:27.333711 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:33:27.333719 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:33:27.333727 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:33:27.333735 | orchestrator |
2026-02-28 00:33:27.333743 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-02-28 00:33:27.333764 | orchestrator | Saturday 28 February 2026 00:33:00 +0000 (0:00:09.692) 0:06:02.474 *****
2026-02-28 00:33:27.333772 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:33:27.333780 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:33:27.333788 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:33:27.333796 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:33:27.333804 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:33:27.333812 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:33:27.333819 | orchestrator | changed: [testbed-manager]
2026-02-28 00:33:27.333827 | orchestrator |
2026-02-28 00:33:27.333835 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-02-28 00:33:27.333843 | orchestrator | Saturday 28 February 2026 00:33:00 +0000 (0:00:00.881) 0:06:03.356 *****
2026-02-28 00:33:27.333851 | orchestrator | ok: [testbed-manager]
2026-02-28 00:33:27.333859 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:33:27.333866 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:33:27.333874 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:33:27.333882 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:33:27.333890 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:33:27.333898 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:33:27.333905 | orchestrator |
2026-02-28 00:33:27.333913 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-02-28 00:33:27.333921 | orchestrator | Saturday 28 February 2026 00:33:09 +0000 (0:00:09.002) 0:06:12.359 *****
2026-02-28 00:33:27.333929 | orchestrator | ok: [testbed-manager]
2026-02-28 00:33:27.333937 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:33:27.333944 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:33:27.333952 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:33:27.333960 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:33:27.333968 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:33:27.333976 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:33:27.333983 | orchestrator |
2026-02-28 00:33:27.333991 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-02-28 00:33:27.333999 | orchestrator | Saturday 28 February 2026 00:33:20 +0000 (0:00:10.968) 0:06:23.327 *****
2026-02-28 00:33:27.334008 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-02-28 00:33:27.334066 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-02-28 00:33:27.334075 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-02-28 00:33:27.334083 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-02-28 00:33:27.334117 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-02-28 00:33:27.334125 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-02-28 00:33:27.334133 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-02-28 00:33:27.334141 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-02-28 00:33:27.334148 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-02-28 00:33:27.334156 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-02-28 00:33:27.334164 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-02-28 00:33:27.334172 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-02-28 00:33:27.334180 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-02-28 00:33:27.334188 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-02-28 00:33:27.334202 | orchestrator |
2026-02-28 00:33:27.334210 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-02-28 00:33:27.334218 | orchestrator | Saturday 28 February 2026 00:33:22 +0000 (0:00:01.182) 0:06:24.509 *****
2026-02-28 00:33:27.334226 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:33:27.334234 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:33:27.334242 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:33:27.334250 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:33:27.334257 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:33:27.334265 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:33:27.334273 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:33:27.334281 | orchestrator |
2026-02-28 00:33:27.334289 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-02-28 00:33:27.334297 | orchestrator | Saturday 28 February 2026 00:33:22 +0000 (0:00:00.539) 0:06:25.048 *****
2026-02-28 00:33:27.334305 | orchestrator | ok: [testbed-manager]
2026-02-28 00:33:27.334313 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:33:27.334321 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:33:27.334329 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:33:27.334337 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:33:27.334345 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:33:27.334353 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:33:27.334361 | orchestrator |
2026-02-28 00:33:27.334369 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-02-28 00:33:27.334378 | orchestrator | Saturday 28 February 2026 00:33:26 +0000 (0:00:03.752) 0:06:28.801 *****
2026-02-28 00:33:27.334386 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:33:27.334394 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:33:27.334402 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:33:27.334410 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:33:27.334417 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:33:27.334425 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:33:27.334433 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:33:27.334441 | orchestrator |
2026-02-28 00:33:27.334449 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-02-28 00:33:27.334458 | orchestrator | Saturday 28 February 2026 00:33:27 +0000 (0:00:00.712) 0:06:29.513 *****
2026-02-28 00:33:27.334466 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-02-28 00:33:27.334473 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-02-28 00:33:27.334481 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:33:27.334489 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-02-28 00:33:27.334497 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-02-28 00:33:27.334505 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:33:27.334513 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-02-28 00:33:27.334521 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-02-28 00:33:27.334529 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:33:27.334542 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-02-28 00:33:46.508511 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-02-28 00:33:46.508646 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:33:46.508675 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-02-28 00:33:46.508695 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-02-28 00:33:46.508712 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:33:46.508723 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-02-28 00:33:46.508734 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-02-28 00:33:46.508745 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:33:46.508756 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-02-28 00:33:46.508793 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-02-28 00:33:46.508805 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:33:46.508822 | orchestrator |
2026-02-28 00:33:46.508841 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-02-28 00:33:46.508861 | orchestrator | Saturday 28 February 2026 00:33:27 +0000 (0:00:00.592) 0:06:30.106 *****
2026-02-28 00:33:46.508881 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:33:46.508900 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:33:46.508918 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:33:46.508938 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:33:46.508957 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:33:46.508977 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:33:46.508996 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:33:46.509014 | orchestrator |
2026-02-28 00:33:46.509033 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-02-28 00:33:46.509055 | orchestrator | Saturday 28 February 2026 00:33:28 +0000 (0:00:00.547) 0:06:30.654 *****
2026-02-28 00:33:46.509077 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:33:46.509126 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:33:46.509147 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:33:46.509165 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:33:46.509184 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:33:46.509203 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:33:46.509222 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:33:46.509242 | orchestrator |
2026-02-28 00:33:46.509262 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-02-28 00:33:46.509280 | orchestrator | Saturday 28 February 2026 00:33:28 +0000 (0:00:00.527) 0:06:31.181 *****
2026-02-28 00:33:46.509300 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:33:46.509319 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:33:46.509338 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:33:46.509356 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:33:46.509371 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:33:46.509383 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:33:46.509393 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:33:46.509404 | orchestrator |
2026-02-28 00:33:46.509415 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-02-28 00:33:46.509426 | orchestrator | Saturday 28 February 2026 00:33:29 +0000 (0:00:00.616) 0:06:31.798 *****
2026-02-28 00:33:46.509437 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:33:46.509448 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:33:46.509459 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:33:46.509469 | orchestrator | ok: [testbed-manager]
2026-02-28 00:33:46.509484 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:33:46.509502 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:33:46.509520 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:33:46.509539 | orchestrator |
2026-02-28 00:33:46.509558 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-02-28 00:33:46.509576 | orchestrator | Saturday 28 February 2026 00:33:31 +0000 (0:00:01.914) 0:06:33.712 *****
2026-02-28 00:33:46.509606 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-28 00:33:46.509630 | orchestrator |
2026-02-28 00:33:46.509650 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-02-28 00:33:46.509668 | orchestrator | Saturday 28 February 2026 00:33:32 +0000 (0:00:00.881) 0:06:34.593 *****
2026-02-28 00:33:46.509686 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:33:46.509706 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:33:46.509728 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:33:46.509748 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:33:46.509768 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:33:46.509803 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:33:46.509822 | orchestrator | ok: [testbed-manager]
2026-02-28 00:33:46.509842 | orchestrator |
2026-02-28 00:33:46.509862 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-02-28 00:33:46.509880 | orchestrator | Saturday 28 February 2026 00:33:33 +0000 (0:00:00.886) 0:06:35.479 *****
2026-02-28 00:33:46.509898 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:33:46.509909 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:33:46.509920 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:33:46.509931 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:33:46.509942 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:33:46.509953 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:33:46.509963 | orchestrator | ok: [testbed-manager]
2026-02-28 00:33:46.509974 | orchestrator |
2026-02-28 00:33:46.509985 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-02-28 00:33:46.509996 | orchestrator | Saturday 28 February 2026 00:33:33 +0000 (0:00:00.863) 0:06:36.343 *****
2026-02-28 00:33:46.510007 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:33:46.510086 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:33:46.510134 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:33:46.510153 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:33:46.510172 | orchestrator | changed: [testbed-node-1]
2026-02-28
00:33:46.510191 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:46.510209 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:46.510226 | orchestrator | 2026-02-28 00:33:46.510238 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-02-28 00:33:46.510270 | orchestrator | Saturday 28 February 2026 00:33:35 +0000 (0:00:01.525) 0:06:37.868 ***** 2026-02-28 00:33:46.510282 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:33:46.510293 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:33:46.510304 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:33:46.510314 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:33:46.510325 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:46.510336 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:46.510347 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:33:46.510357 | orchestrator | 2026-02-28 00:33:46.510368 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-02-28 00:33:46.510379 | orchestrator | Saturday 28 February 2026 00:33:36 +0000 (0:00:01.269) 0:06:39.137 ***** 2026-02-28 00:33:46.510390 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:46.510401 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:46.510411 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:46.510422 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:46.510433 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:46.510443 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:46.510454 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:46.510465 | orchestrator | 2026-02-28 00:33:46.510476 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-02-28 00:33:46.510487 | orchestrator | Saturday 28 February 2026 00:33:37 +0000 (0:00:01.284) 0:06:40.421 ***** 2026-02-28 00:33:46.510498 | orchestrator | changed: 
[testbed-node-3] 2026-02-28 00:33:46.510508 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:46.510519 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:46.510530 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:46.510540 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:46.510551 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:46.510561 | orchestrator | changed: [testbed-manager] 2026-02-28 00:33:46.510572 | orchestrator | 2026-02-28 00:33:46.510583 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-02-28 00:33:46.510594 | orchestrator | Saturday 28 February 2026 00:33:39 +0000 (0:00:01.367) 0:06:41.789 ***** 2026-02-28 00:33:46.510605 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-28 00:33:46.510632 | orchestrator | 2026-02-28 00:33:46.510643 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-02-28 00:33:46.510654 | orchestrator | Saturday 28 February 2026 00:33:40 +0000 (0:00:01.086) 0:06:42.875 ***** 2026-02-28 00:33:46.510665 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:33:46.510676 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:33:46.510686 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:33:46.510697 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:46.510708 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:46.510718 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:33:46.510729 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:46.510740 | orchestrator | 2026-02-28 00:33:46.510750 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-02-28 00:33:46.510761 | orchestrator | Saturday 28 February 2026 00:33:41 +0000 
(0:00:01.416) 0:06:44.292 ***** 2026-02-28 00:33:46.510772 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:33:46.510783 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:33:46.510794 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:33:46.510804 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:46.510815 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:46.510825 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:33:46.510836 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:46.510847 | orchestrator | 2026-02-28 00:33:46.510858 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-02-28 00:33:46.510868 | orchestrator | Saturday 28 February 2026 00:33:42 +0000 (0:00:01.144) 0:06:45.437 ***** 2026-02-28 00:33:46.510879 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:33:46.510890 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:33:46.510901 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:33:46.510911 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:46.510922 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:46.510933 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:33:46.510944 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:46.510954 | orchestrator | 2026-02-28 00:33:46.510965 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-02-28 00:33:46.510976 | orchestrator | Saturday 28 February 2026 00:33:44 +0000 (0:00:01.116) 0:06:46.554 ***** 2026-02-28 00:33:46.510987 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:33:46.510998 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:33:46.511008 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:33:46.511019 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:46.511030 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:46.511040 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:33:46.511051 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:46.511062 
| orchestrator | 2026-02-28 00:33:46.511073 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-02-28 00:33:46.511083 | orchestrator | Saturday 28 February 2026 00:33:45 +0000 (0:00:01.376) 0:06:47.930 ***** 2026-02-28 00:33:46.511227 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-28 00:33:46.511270 | orchestrator | 2026-02-28 00:33:46.511283 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-28 00:33:46.511294 | orchestrator | Saturday 28 February 2026 00:33:46 +0000 (0:00:00.897) 0:06:48.827 ***** 2026-02-28 00:33:46.511305 | orchestrator | 2026-02-28 00:33:46.511316 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-28 00:33:46.511326 | orchestrator | Saturday 28 February 2026 00:33:46 +0000 (0:00:00.038) 0:06:48.865 ***** 2026-02-28 00:33:46.511337 | orchestrator | 2026-02-28 00:33:46.511348 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-28 00:33:46.511359 | orchestrator | Saturday 28 February 2026 00:33:46 +0000 (0:00:00.044) 0:06:48.910 ***** 2026-02-28 00:33:46.511383 | orchestrator | 2026-02-28 00:33:46.511412 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-28 00:33:46.511449 | orchestrator | Saturday 28 February 2026 00:33:46 +0000 (0:00:00.039) 0:06:48.949 ***** 2026-02-28 00:34:13.000626 | orchestrator | 2026-02-28 00:34:13.000767 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-28 00:34:13.000796 | orchestrator | Saturday 28 February 2026 00:33:46 +0000 (0:00:00.042) 0:06:48.991 ***** 2026-02-28 00:34:13.000815 | orchestrator 
| 2026-02-28 00:34:13.000832 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-28 00:34:13.000852 | orchestrator | Saturday 28 February 2026 00:33:46 +0000 (0:00:00.046) 0:06:49.037 ***** 2026-02-28 00:34:13.000870 | orchestrator | 2026-02-28 00:34:13.000888 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-28 00:34:13.000906 | orchestrator | Saturday 28 February 2026 00:33:46 +0000 (0:00:00.047) 0:06:49.085 ***** 2026-02-28 00:34:13.000923 | orchestrator | 2026-02-28 00:34:13.000942 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-28 00:34:13.000960 | orchestrator | Saturday 28 February 2026 00:33:46 +0000 (0:00:00.038) 0:06:49.123 ***** 2026-02-28 00:34:13.000981 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:34:13.000994 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:34:13.001005 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:34:13.001017 | orchestrator | 2026-02-28 00:34:13.001029 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-02-28 00:34:13.001041 | orchestrator | Saturday 28 February 2026 00:33:47 +0000 (0:00:01.208) 0:06:50.332 ***** 2026-02-28 00:34:13.001052 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:34:13.001065 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:34:13.001076 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:34:13.001087 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:34:13.001098 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:34:13.001145 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:34:13.001158 | orchestrator | changed: [testbed-manager] 2026-02-28 00:34:13.001171 | orchestrator | 2026-02-28 00:34:13.001184 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-02-28 00:34:13.001197 | orchestrator 
| Saturday 28 February 2026 00:33:49 +0000 (0:00:01.543) 0:06:51.876 ***** 2026-02-28 00:34:13.001209 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:34:13.001222 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:34:13.001234 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:34:13.001246 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:34:13.001259 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:34:13.001271 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:34:13.001283 | orchestrator | changed: [testbed-manager] 2026-02-28 00:34:13.001296 | orchestrator | 2026-02-28 00:34:13.001309 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-02-28 00:34:13.001322 | orchestrator | Saturday 28 February 2026 00:33:50 +0000 (0:00:01.251) 0:06:53.128 ***** 2026-02-28 00:34:13.001334 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:34:13.001347 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:34:13.001360 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:34:13.001372 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:34:13.001384 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:34:13.001401 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:34:13.001422 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:34:13.001442 | orchestrator | 2026-02-28 00:34:13.001461 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-02-28 00:34:13.001480 | orchestrator | Saturday 28 February 2026 00:33:53 +0000 (0:00:02.530) 0:06:55.658 ***** 2026-02-28 00:34:13.001499 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:34:13.001516 | orchestrator | 2026-02-28 00:34:13.001533 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-02-28 00:34:13.001551 | orchestrator | Saturday 28 February 2026 00:33:53 +0000 (0:00:00.089) 0:06:55.748 ***** 2026-02-28 
00:34:13.001609 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:34:13.001629 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:34:13.001647 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:34:13.001665 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:34:13.001685 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:34:13.001704 | orchestrator | ok: [testbed-manager] 2026-02-28 00:34:13.001723 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:34:13.001741 | orchestrator | 2026-02-28 00:34:13.001781 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-02-28 00:34:13.001802 | orchestrator | Saturday 28 February 2026 00:33:54 +0000 (0:00:00.994) 0:06:56.743 ***** 2026-02-28 00:34:13.001814 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:34:13.001825 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:34:13.001836 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:34:13.001847 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:34:13.001857 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:34:13.001868 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:34:13.001879 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:34:13.001893 | orchestrator | 2026-02-28 00:34:13.001912 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-02-28 00:34:13.001930 | orchestrator | Saturday 28 February 2026 00:33:55 +0000 (0:00:00.767) 0:06:57.510 ***** 2026-02-28 00:34:13.001949 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-28 00:34:13.001970 | orchestrator | 2026-02-28 00:34:13.001988 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-02-28 
00:34:13.002008 | orchestrator | Saturday 28 February 2026 00:33:55 +0000 (0:00:00.934) 0:06:58.444 ***** 2026-02-28 00:34:13.002099 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:34:13.002166 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:34:13.002185 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:34:13.002205 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:34:13.002222 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:34:13.002241 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:34:13.002255 | orchestrator | ok: [testbed-manager] 2026-02-28 00:34:13.002266 | orchestrator | 2026-02-28 00:34:13.002277 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-02-28 00:34:13.002288 | orchestrator | Saturday 28 February 2026 00:33:56 +0000 (0:00:00.838) 0:06:59.283 ***** 2026-02-28 00:34:13.002299 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-02-28 00:34:13.002332 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-02-28 00:34:13.002344 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-02-28 00:34:13.002355 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-02-28 00:34:13.002366 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-02-28 00:34:13.002377 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-02-28 00:34:13.002387 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-02-28 00:34:13.002398 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-02-28 00:34:13.002410 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-02-28 00:34:13.002420 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-02-28 00:34:13.002431 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-02-28 00:34:13.002442 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 
2026-02-28 00:34:13.002453 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-02-28 00:34:13.002463 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-02-28 00:34:13.002474 | orchestrator | 2026-02-28 00:34:13.002485 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2026-02-28 00:34:13.002511 | orchestrator | Saturday 28 February 2026 00:33:59 +0000 (0:00:02.706) 0:07:01.990 ***** 2026-02-28 00:34:13.002522 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:34:13.002533 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:34:13.002544 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:34:13.002555 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:34:13.002566 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:34:13.002577 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:34:13.002588 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:34:13.002599 | orchestrator | 2026-02-28 00:34:13.002610 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-02-28 00:34:13.002621 | orchestrator | Saturday 28 February 2026 00:34:00 +0000 (0:00:00.577) 0:07:02.567 ***** 2026-02-28 00:34:13.002633 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-28 00:34:13.002647 | orchestrator | 2026-02-28 00:34:13.002658 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-02-28 00:34:13.002669 | orchestrator | Saturday 28 February 2026 00:34:01 +0000 (0:00:00.901) 0:07:03.469 ***** 2026-02-28 00:34:13.002680 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:34:13.002690 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:34:13.002701 | 
orchestrator | ok: [testbed-node-5] 2026-02-28 00:34:13.002712 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:34:13.002723 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:34:13.002734 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:34:13.002744 | orchestrator | ok: [testbed-manager] 2026-02-28 00:34:13.002755 | orchestrator | 2026-02-28 00:34:13.002766 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-02-28 00:34:13.002777 | orchestrator | Saturday 28 February 2026 00:34:01 +0000 (0:00:00.932) 0:07:04.402 ***** 2026-02-28 00:34:13.002788 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:34:13.002799 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:34:13.002809 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:34:13.002820 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:34:13.002831 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:34:13.002842 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:34:13.002852 | orchestrator | ok: [testbed-manager] 2026-02-28 00:34:13.002863 | orchestrator | 2026-02-28 00:34:13.002874 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-02-28 00:34:13.002885 | orchestrator | Saturday 28 February 2026 00:34:03 +0000 (0:00:01.078) 0:07:05.480 ***** 2026-02-28 00:34:13.002896 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:34:13.002915 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:34:13.002926 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:34:13.002937 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:34:13.002948 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:34:13.002959 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:34:13.002970 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:34:13.002980 | orchestrator | 2026-02-28 00:34:13.002991 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 
2026-02-28 00:34:13.003002 | orchestrator | Saturday 28 February 2026 00:34:03 +0000 (0:00:00.518) 0:07:05.999 ***** 2026-02-28 00:34:13.003013 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:34:13.003024 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:34:13.003035 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:34:13.003046 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:34:13.003057 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:34:13.003068 | orchestrator | ok: [testbed-manager] 2026-02-28 00:34:13.003079 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:34:13.003090 | orchestrator | 2026-02-28 00:34:13.003101 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-02-28 00:34:13.003133 | orchestrator | Saturday 28 February 2026 00:34:05 +0000 (0:00:01.581) 0:07:07.580 ***** 2026-02-28 00:34:13.003151 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:34:13.003161 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:34:13.003172 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:34:13.003183 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:34:13.003194 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:34:13.003205 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:34:13.003215 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:34:13.003226 | orchestrator | 2026-02-28 00:34:13.003237 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-02-28 00:34:13.003248 | orchestrator | Saturday 28 February 2026 00:34:05 +0000 (0:00:00.494) 0:07:08.075 ***** 2026-02-28 00:34:13.003259 | orchestrator | ok: [testbed-manager] 2026-02-28 00:34:13.003270 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:34:13.003281 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:34:13.003291 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:34:13.003302 | orchestrator | changed: [testbed-node-1] 2026-02-28 
00:34:13.003313 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:34:13.003331 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:34:46.268508 | orchestrator | 2026-02-28 00:34:46.268648 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2026-02-28 00:34:46.268665 | orchestrator | Saturday 28 February 2026 00:34:13 +0000 (0:00:07.420) 0:07:15.495 ***** 2026-02-28 00:34:46.268677 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:34:46.268690 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:34:46.268701 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:34:46.268713 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:34:46.268724 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:34:46.268779 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:34:46.268793 | orchestrator | ok: [testbed-manager] 2026-02-28 00:34:46.268805 | orchestrator | 2026-02-28 00:34:46.268816 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-02-28 00:34:46.268828 | orchestrator | Saturday 28 February 2026 00:34:15 +0000 (0:00:02.556) 0:07:18.051 ***** 2026-02-28 00:34:46.268839 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:34:46.268850 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:34:46.268862 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:34:46.268873 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:34:46.268884 | orchestrator | ok: [testbed-manager] 2026-02-28 00:34:46.268895 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:34:46.268906 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:34:46.268917 | orchestrator | 2026-02-28 00:34:46.268928 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-02-28 00:34:46.268939 | orchestrator | Saturday 28 February 2026 00:34:17 +0000 (0:00:01.666) 0:07:19.718 ***** 2026-02-28 00:34:46.268950 | 
orchestrator | changed: [testbed-node-3] 2026-02-28 00:34:46.268961 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:34:46.268972 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:34:46.268983 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:34:46.268994 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:34:46.269005 | orchestrator | ok: [testbed-manager] 2026-02-28 00:34:46.269016 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:34:46.269027 | orchestrator | 2026-02-28 00:34:46.269038 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-28 00:34:46.269049 | orchestrator | Saturday 28 February 2026 00:34:18 +0000 (0:00:01.633) 0:07:21.351 ***** 2026-02-28 00:34:46.269060 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:34:46.269071 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:34:46.269082 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:34:46.269093 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:34:46.269104 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:34:46.269138 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:34:46.269149 | orchestrator | ok: [testbed-manager] 2026-02-28 00:34:46.269160 | orchestrator | 2026-02-28 00:34:46.269171 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-28 00:34:46.269208 | orchestrator | Saturday 28 February 2026 00:34:19 +0000 (0:00:01.055) 0:07:22.406 ***** 2026-02-28 00:34:46.269220 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:34:46.269230 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:34:46.269241 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:34:46.269252 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:34:46.269264 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:34:46.269275 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:34:46.269285 | orchestrator | skipping: [testbed-manager] 2026-02-28 
00:34:46.269296 | orchestrator | 2026-02-28 00:34:46.269307 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-02-28 00:34:46.269318 | orchestrator | Saturday 28 February 2026 00:34:20 +0000 (0:00:00.812) 0:07:23.219 ***** 2026-02-28 00:34:46.269329 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:34:46.269339 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:34:46.269350 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:34:46.269361 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:34:46.269372 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:34:46.269382 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:34:46.269393 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:34:46.269404 | orchestrator | 2026-02-28 00:34:46.269415 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-02-28 00:34:46.269426 | orchestrator | Saturday 28 February 2026 00:34:21 +0000 (0:00:00.536) 0:07:23.755 ***** 2026-02-28 00:34:46.269437 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:34:46.269447 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:34:46.269458 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:34:46.269469 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:34:46.269480 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:34:46.269491 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:34:46.269501 | orchestrator | ok: [testbed-manager] 2026-02-28 00:34:46.269514 | orchestrator | 2026-02-28 00:34:46.269525 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-02-28 00:34:46.269536 | orchestrator | Saturday 28 February 2026 00:34:21 +0000 (0:00:00.511) 0:07:24.266 ***** 2026-02-28 00:34:46.269547 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:34:46.269558 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:34:46.269568 | orchestrator | ok: [testbed-node-5] 
2026-02-28 00:34:46.269579 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:46.269589 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:46.269600 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:46.269611 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:46.269622 | orchestrator |
2026-02-28 00:34:46.269646 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-02-28 00:34:46.269668 | orchestrator | Saturday 28 February 2026 00:34:22 +0000 (0:00:00.740) 0:07:25.007 *****
2026-02-28 00:34:46.269680 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:46.269691 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:46.269702 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:46.269713 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:46.269723 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:46.269734 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:46.269745 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:46.269756 | orchestrator |
2026-02-28 00:34:46.269767 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-02-28 00:34:46.269779 | orchestrator | Saturday 28 February 2026 00:34:23 +0000 (0:00:00.567) 0:07:25.574 *****
2026-02-28 00:34:46.269789 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:46.269800 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:46.269811 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:46.269822 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:46.269833 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:46.269844 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:46.269855 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:46.269866 | orchestrator |
2026-02-28 00:34:46.269902 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-02-28 00:34:46.269915 | orchestrator | Saturday 28 February 2026 00:34:28 +0000 (0:00:05.605) 0:07:31.180 *****
2026-02-28 00:34:46.269926 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:34:46.269937 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:34:46.269948 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:34:46.269959 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:34:46.269970 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:34:46.269981 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:34:46.269992 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:34:46.270003 | orchestrator |
2026-02-28 00:34:46.270014 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-02-28 00:34:46.270079 | orchestrator | Saturday 28 February 2026 00:34:29 +0000 (0:00:00.531) 0:07:31.711 *****
2026-02-28 00:34:46.270108 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-28 00:34:46.270160 | orchestrator |
2026-02-28 00:34:46.270172 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-02-28 00:34:46.270183 | orchestrator | Saturday 28 February 2026 00:34:30 +0000 (0:00:01.006) 0:07:32.718 *****
2026-02-28 00:34:46.270194 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:46.270205 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:46.270216 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:46.270227 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:46.270238 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:46.270248 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:46.270259 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:46.270270 | orchestrator |
2026-02-28 00:34:46.270281 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-02-28 00:34:46.270292 | orchestrator | Saturday 28 February 2026 00:34:32 +0000 (0:00:01.823) 0:07:34.541 *****
2026-02-28 00:34:46.270303 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:46.270314 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:46.270325 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:46.270336 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:46.270347 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:46.270358 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:46.270369 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:46.270380 | orchestrator |
2026-02-28 00:34:46.270392 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-02-28 00:34:46.270408 | orchestrator | Saturday 28 February 2026 00:34:33 +0000 (0:00:01.154) 0:07:35.696 *****
2026-02-28 00:34:46.270428 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:46.270447 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:46.270465 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:46.270484 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:46.270502 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:46.270521 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:46.270539 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:46.270557 | orchestrator |
2026-02-28 00:34:46.270576 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-02-28 00:34:46.270596 | orchestrator | Saturday 28 February 2026 00:34:34 +0000 (0:00:00.867) 0:07:36.564 *****
2026-02-28 00:34:46.270615 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-28 00:34:46.270637 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-28 00:34:46.270658 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-28 00:34:46.270689 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-28 00:34:46.270724 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-28 00:34:46.270742 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-28 00:34:46.270761 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-28 00:34:46.270781 | orchestrator |
2026-02-28 00:34:46.270799 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-02-28 00:34:46.270817 | orchestrator | Saturday 28 February 2026 00:34:36 +0000 (0:00:01.958) 0:07:38.523 *****
2026-02-28 00:34:46.270837 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-28 00:34:46.270858 | orchestrator |
2026-02-28 00:34:46.270877 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-02-28 00:34:46.270896 | orchestrator | Saturday 28 February 2026 00:34:36 +0000 (0:00:00.831) 0:07:39.354 *****
2026-02-28 00:34:46.270914 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:34:46.270932 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:34:46.270952 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:34:46.270972 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:34:46.270991 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:34:46.271010 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:34:46.271031 | orchestrator | changed: [testbed-manager]
2026-02-28 00:34:46.271051 | orchestrator |
2026-02-28 00:34:46.271077 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-02-28 00:35:17.693242 | orchestrator | Saturday 28 February 2026 00:34:46 +0000 (0:00:09.357) 0:07:48.711 *****
2026-02-28 00:35:17.693358 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:35:17.693376 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:35:17.693388 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:35:17.693400 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:35:17.693411 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:35:17.693422 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:35:17.693434 | orchestrator | ok: [testbed-manager]
2026-02-28 00:35:17.693445 | orchestrator |
2026-02-28 00:35:17.693457 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-02-28 00:35:17.693469 | orchestrator | Saturday 28 February 2026 00:34:48 +0000 (0:00:02.114) 0:07:50.826 *****
2026-02-28 00:35:17.693480 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:35:17.693491 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:35:17.693502 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:35:17.693514 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:35:17.693525 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:35:17.693536 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:35:17.693547 | orchestrator |
2026-02-28 00:35:17.693558 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-02-28 00:35:17.693570 | orchestrator | Saturday 28 February 2026 00:34:49 +0000 (0:00:01.359) 0:07:52.185 *****
2026-02-28 00:35:17.693581 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:35:17.693593 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:35:17.693604 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:35:17.693615 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:35:17.693626 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:35:17.693637 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:35:17.693648 | orchestrator | changed: [testbed-manager]
2026-02-28 00:35:17.693659 | orchestrator |
2026-02-28 00:35:17.693673 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-02-28 00:35:17.693685 | orchestrator |
2026-02-28 00:35:17.693698 | orchestrator | TASK [Include hardening role] **************************************************
2026-02-28 00:35:17.693749 | orchestrator | Saturday 28 February 2026 00:34:51 +0000 (0:00:01.362) 0:07:53.547 *****
2026-02-28 00:35:17.693761 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:35:17.693775 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:35:17.693788 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:35:17.693800 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:35:17.693813 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:35:17.693825 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:35:17.693836 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:35:17.693847 | orchestrator |
2026-02-28 00:35:17.693858 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-02-28 00:35:17.693869 | orchestrator |
2026-02-28 00:35:17.693880 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-02-28 00:35:17.693891 | orchestrator | Saturday 28 February 2026 00:34:51 +0000 (0:00:00.711) 0:07:54.258 *****
2026-02-28 00:35:17.693902 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:35:17.693914 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:35:17.693925 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:35:17.693936 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:35:17.693947 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:35:17.693958 | orchestrator | changed: [testbed-manager]
2026-02-28 00:35:17.693969 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:35:17.693980 | orchestrator |
2026-02-28 00:35:17.693991 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-02-28 00:35:17.694002 | orchestrator | Saturday 28 February 2026 00:34:53 +0000 (0:00:01.484) 0:07:55.743 *****
2026-02-28 00:35:17.694088 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:35:17.694102 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:35:17.694113 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:35:17.694149 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:35:17.694161 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:35:17.694172 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:35:17.694183 | orchestrator | ok: [testbed-manager]
2026-02-28 00:35:17.694194 | orchestrator |
2026-02-28 00:35:17.694205 | orchestrator | TASK [Include auditd role] *****************************************************
2026-02-28 00:35:17.694216 | orchestrator | Saturday 28 February 2026 00:34:54 +0000 (0:00:01.473) 0:07:57.216 *****
2026-02-28 00:35:17.694242 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:35:17.694254 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:35:17.694264 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:35:17.694275 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:35:17.694286 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:35:17.694297 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:35:17.694308 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:35:17.694319 | orchestrator |
2026-02-28 00:35:17.694330 | orchestrator | TASK [Include smartd role] *****************************************************
2026-02-28 00:35:17.694341 | orchestrator | Saturday 28 February 2026 00:34:55 +0000 (0:00:00.693) 0:07:57.910 *****
2026-02-28 00:35:17.694352 | orchestrator | included: osism.services.smartd for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-28 00:35:17.694365 | orchestrator |
2026-02-28 00:35:17.694376 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-02-28 00:35:17.694387 | orchestrator | Saturday 28 February 2026 00:34:56 +0000 (0:00:00.849) 0:07:58.760 *****
2026-02-28 00:35:17.694399 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-28 00:35:17.694413 | orchestrator |
2026-02-28 00:35:17.694424 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-02-28 00:35:17.694435 | orchestrator | Saturday 28 February 2026 00:34:57 +0000 (0:00:00.810) 0:07:59.571 *****
2026-02-28 00:35:17.694457 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:35:17.694468 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:35:17.694479 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:35:17.694489 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:35:17.694500 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:35:17.694511 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:35:17.694522 | orchestrator | changed: [testbed-manager]
2026-02-28 00:35:17.694533 | orchestrator |
2026-02-28 00:35:17.694562 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-02-28 00:35:17.694574 | orchestrator | Saturday 28 February 2026 00:35:06 +0000 (0:00:08.879) 0:08:08.451 *****
2026-02-28 00:35:17.694585 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:35:17.694596 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:35:17.694607 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:35:17.694618 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:35:17.694628 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:35:17.694639 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:35:17.694650 | orchestrator | changed: [testbed-manager]
2026-02-28 00:35:17.694661 | orchestrator |
2026-02-28 00:35:17.694672 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-02-28 00:35:17.694683 | orchestrator | Saturday 28 February 2026 00:35:06 +0000 (0:00:00.838) 0:08:09.289 *****
2026-02-28 00:35:17.694694 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:35:17.694705 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:35:17.694716 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:35:17.694727 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:35:17.694737 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:35:17.694748 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:35:17.694759 | orchestrator | changed: [testbed-manager]
2026-02-28 00:35:17.694770 | orchestrator |
2026-02-28 00:35:17.694781 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-02-28 00:35:17.694792 | orchestrator | Saturday 28 February 2026 00:35:08 +0000 (0:00:01.284) 0:08:10.573 *****
2026-02-28 00:35:17.694803 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:35:17.694814 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:35:17.694825 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:35:17.694836 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:35:17.694846 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:35:17.694857 | orchestrator | changed: [testbed-manager]
2026-02-28 00:35:17.694868 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:35:17.694879 | orchestrator |
2026-02-28 00:35:17.694890 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-02-28 00:35:17.694901 | orchestrator | Saturday 28 February 2026 00:35:10 +0000 (0:00:02.045) 0:08:12.619 *****
2026-02-28 00:35:17.694912 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:35:17.694923 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:35:17.694934 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:35:17.694944 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:35:17.694955 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:35:17.694966 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:35:17.694976 | orchestrator | changed: [testbed-manager]
2026-02-28 00:35:17.694987 | orchestrator |
2026-02-28 00:35:17.694998 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-02-28 00:35:17.695009 | orchestrator | Saturday 28 February 2026 00:35:11 +0000 (0:00:01.270) 0:08:13.890 *****
2026-02-28 00:35:17.695020 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:35:17.695031 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:35:17.695041 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:35:17.695052 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:35:17.695063 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:35:17.695074 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:35:17.695084 | orchestrator | changed: [testbed-manager]
2026-02-28 00:35:17.695095 | orchestrator |
2026-02-28 00:35:17.695114 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-02-28 00:35:17.695148 | orchestrator |
2026-02-28 00:35:17.695160 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-02-28 00:35:17.695171 | orchestrator | Saturday 28 February 2026 00:35:12 +0000 (0:00:01.175) 0:08:15.065 *****
2026-02-28 00:35:17.695182 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-28 00:35:17.695193 | orchestrator |
2026-02-28 00:35:17.695205 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-28 00:35:17.695222 | orchestrator | Saturday 28 February 2026 00:35:13 +0000 (0:00:01.085) 0:08:16.151 *****
2026-02-28 00:35:17.695233 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:35:17.695244 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:35:17.695255 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:35:17.695267 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:35:17.695277 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:35:17.695288 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:35:17.695299 | orchestrator | ok: [testbed-manager]
2026-02-28 00:35:17.695310 | orchestrator |
2026-02-28 00:35:17.695321 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-28 00:35:17.695332 | orchestrator | Saturday 28 February 2026 00:35:14 +0000 (0:00:00.884) 0:08:17.035 *****
2026-02-28 00:35:17.695343 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:35:17.695355 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:35:17.695366 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:35:17.695377 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:35:17.695388 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:35:17.695399 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:35:17.695410 | orchestrator | changed: [testbed-manager]
2026-02-28 00:35:17.695421 | orchestrator |
2026-02-28 00:35:17.695432 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-02-28 00:35:17.695443 | orchestrator | Saturday 28 February 2026 00:35:15 +0000 (0:00:01.201) 0:08:18.237 *****
2026-02-28 00:35:17.695454 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-28 00:35:17.695465 | orchestrator |
2026-02-28 00:35:17.695476 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-28 00:35:17.695487 | orchestrator | Saturday 28 February 2026 00:35:16 +0000 (0:00:01.041) 0:08:19.278 *****
2026-02-28 00:35:17.695498 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:35:17.695509 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:35:17.695520 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:35:17.695531 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:35:17.695542 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:35:17.695553 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:35:17.695564 | orchestrator | ok: [testbed-manager]
2026-02-28 00:35:17.695575 | orchestrator |
2026-02-28 00:35:17.695593 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-28 00:35:19.249791 | orchestrator | Saturday 28 February 2026 00:35:17 +0000 (0:00:00.856) 0:08:20.135 *****
2026-02-28 00:35:19.249895 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:35:19.249911 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:35:19.249922 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:35:19.249933 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:35:19.249944 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:35:19.249955 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:35:19.249966 | orchestrator | changed: [testbed-manager]
2026-02-28 00:35:19.249977 | orchestrator |
2026-02-28 00:35:19.249989 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:35:19.250001 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-02-28 00:35:19.250110 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-28 00:35:19.250149 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-28 00:35:19.250162 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-28 00:35:19.250173 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-02-28 00:35:19.250184 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-28 00:35:19.250195 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-28 00:35:19.250206 | orchestrator |
2026-02-28 00:35:19.250217 | orchestrator |
2026-02-28 00:35:19.250228 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:35:19.250239 | orchestrator | Saturday 28 February 2026 00:35:18 +0000 (0:00:01.121) 0:08:21.256 *****
2026-02-28 00:35:19.250250 | orchestrator | ===============================================================================
2026-02-28 00:35:19.250261 | orchestrator | osism.commons.packages : Install required packages --------------------- 88.57s
2026-02-28 00:35:19.250272 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.51s
2026-02-28 00:35:19.250283 | orchestrator | osism.commons.packages : Download required packages -------------------- 33.62s
2026-02-28 00:35:19.250294 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.96s
2026-02-28 00:35:19.250305 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.23s
2026-02-28 00:35:19.250317 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.10s
2026-02-28 00:35:19.250328 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.97s
2026-02-28 00:35:19.250338 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.69s
2026-02-28 00:35:19.250349 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.36s
2026-02-28 00:35:19.250360 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.00s
2026-02-28 00:35:19.250386 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.88s
2026-02-28 00:35:19.250398 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.35s
2026-02-28 00:35:19.250409 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.34s
2026-02-28 00:35:19.250420 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.15s
2026-02-28 00:35:19.250431 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.87s
2026-02-28 00:35:19.250442 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.42s
2026-02-28 00:35:19.250452 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.66s
2026-02-28 00:35:19.250463 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.11s
2026-02-28 00:35:19.250474 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.86s
2026-02-28 00:35:19.250485 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.65s
2026-02-28 00:35:19.615523 | orchestrator | + osism apply fail2ban
2026-02-28 00:35:32.576486 | orchestrator | 2026-02-28 00:35:32 | INFO  | Prepare task for execution of fail2ban.
2026-02-28 00:35:32.649528 | orchestrator | 2026-02-28 00:35:32 | INFO  | Task a414db55-99ca-45d3-877e-e164c7471a86 (fail2ban) was prepared for execution.
2026-02-28 00:35:32.649659 | orchestrator | 2026-02-28 00:35:32 | INFO  | It takes a moment until task a414db55-99ca-45d3-877e-e164c7471a86 (fail2ban) has been started and output is visible here.
2026-02-28 00:35:54.484520 | orchestrator |
2026-02-28 00:35:54.484631 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-02-28 00:35:54.484648 | orchestrator |
2026-02-28 00:35:54.484661 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-02-28 00:35:54.484673 | orchestrator | Saturday 28 February 2026 00:35:37 +0000 (0:00:00.277) 0:00:00.277 *****
2026-02-28 00:35:54.484685 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:35:54.484700 | orchestrator |
2026-02-28 00:35:54.484711 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-02-28 00:35:54.484722 | orchestrator | Saturday 28 February 2026 00:35:38 +0000 (0:00:01.140) 0:00:01.418 *****
2026-02-28 00:35:54.484733 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:35:54.484745 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:35:54.484756 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:35:54.484767 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:35:54.484778 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:35:54.484789 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:35:54.484799 | orchestrator | changed: [testbed-manager]
2026-02-28 00:35:54.484810 | orchestrator |
2026-02-28 00:35:54.484821 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-02-28 00:35:54.484832 | orchestrator | Saturday 28 February 2026 00:35:49 +0000 (0:00:11.075) 0:00:12.493 *****
2026-02-28 00:35:54.484843 | orchestrator | changed: [testbed-manager]
2026-02-28 00:35:54.484854 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:35:54.484865 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:35:54.484876 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:35:54.484887 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:35:54.484898 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:35:54.484909 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:35:54.484920 | orchestrator |
2026-02-28 00:35:54.484931 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-02-28 00:35:54.484942 | orchestrator | Saturday 28 February 2026 00:35:50 +0000 (0:00:01.475) 0:00:13.968 *****
2026-02-28 00:35:54.484953 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:35:54.484965 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:35:54.484976 | orchestrator | ok: [testbed-manager]
2026-02-28 00:35:54.484987 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:35:54.484998 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:35:54.485009 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:35:54.485019 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:35:54.485030 | orchestrator |
2026-02-28 00:35:54.485041 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-02-28 00:35:54.485053 | orchestrator | Saturday 28 February 2026 00:35:52 +0000 (0:00:01.476) 0:00:15.445 *****
2026-02-28 00:35:54.485066 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:35:54.485078 | orchestrator | changed: [testbed-manager]
2026-02-28 00:35:54.485091 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:35:54.485104 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:35:54.485116 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:35:54.485128 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:35:54.485168 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:35:54.485181 | orchestrator |
2026-02-28 00:35:54.485194 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:35:54.485207 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:35:54.485221 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:35:54.485260 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:35:54.485274 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:35:54.485302 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:35:54.485314 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:35:54.485325 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:35:54.485336 | orchestrator |
2026-02-28 00:35:54.485347 | orchestrator |
2026-02-28 00:35:54.485358 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:35:54.485369 | orchestrator | Saturday 28 February 2026 00:35:54 +0000 (0:00:01.706) 0:00:17.152 *****
2026-02-28 00:35:54.485380 | orchestrator | ===============================================================================
2026-02-28 00:35:54.485390 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.08s
2026-02-28 00:35:54.485401 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.71s
2026-02-28 00:35:54.485412 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.48s
2026-02-28 00:35:54.485423 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.48s
2026-02-28 00:35:54.485434 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.14s
2026-02-28 00:35:54.864425 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-02-28 00:35:54.864523 | orchestrator | + osism apply network
2026-02-28 00:36:06.970443 | orchestrator | 2026-02-28 00:36:06 | INFO  | Prepare task for execution of network.
2026-02-28 00:36:07.044754 | orchestrator | 2026-02-28 00:36:07 | INFO  | Task 025fbb5b-1df9-499e-829f-b1ae7dcf4ca3 (network) was prepared for execution.
2026-02-28 00:36:07.044849 | orchestrator | 2026-02-28 00:36:07 | INFO  | It takes a moment until task 025fbb5b-1df9-499e-829f-b1ae7dcf4ca3 (network) has been started and output is visible here.
2026-02-28 00:36:36.639404 | orchestrator |
2026-02-28 00:36:36.639543 | orchestrator | PLAY [Apply role network] ******************************************************
2026-02-28 00:36:36.639573 | orchestrator |
2026-02-28 00:36:36.639591 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-02-28 00:36:36.639609 | orchestrator | Saturday 28 February 2026 00:36:11 +0000 (0:00:00.262) 0:00:00.262 *****
2026-02-28 00:36:36.639627 | orchestrator | ok: [testbed-manager]
2026-02-28 00:36:36.639647 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:36:36.639666 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:36:36.639685 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:36:36.639704 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:36:36.639723 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:36:36.639741 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:36:36.639760 | orchestrator |
2026-02-28 00:36:36.639779 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-02-28 00:36:36.639798 | orchestrator | Saturday 28 February 2026 00:36:12 +0000 (0:00:00.715) 0:00:00.977 *****
2026-02-28 00:36:36.639819 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:36:36.639841 | orchestrator |
2026-02-28 00:36:36.639861 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-02-28 00:36:36.639873 | orchestrator | Saturday 28 February 2026 00:36:13 +0000 (0:00:01.224) 0:00:02.202 *****
2026-02-28 00:36:36.639914 | orchestrator | ok: [testbed-manager]
2026-02-28 00:36:36.639928 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:36:36.639941 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:36:36.639954 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:36:36.639967 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:36:36.639979 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:36:36.639991 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:36:36.640003 | orchestrator |
2026-02-28 00:36:36.640016 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-02-28 00:36:36.640028 | orchestrator | Saturday 28 February 2026 00:36:15 +0000 (0:00:02.049) 0:00:04.251 *****
2026-02-28 00:36:36.640041 | orchestrator | ok: [testbed-manager]
2026-02-28 00:36:36.640053 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:36:36.640067 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:36:36.640079 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:36:36.640092 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:36:36.640104 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:36:36.640117 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:36:36.640129 | orchestrator |
2026-02-28 00:36:36.640170 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-02-28 00:36:36.640183 | orchestrator | Saturday 28 February 2026 00:36:17 +0000 (0:00:01.809) 0:00:06.061 *****
2026-02-28 00:36:36.640196 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-02-28 00:36:36.640210 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-02-28 00:36:36.640222 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-02-28 00:36:36.640234 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-02-28 00:36:36.640246 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-02-28 00:36:36.640259 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-02-28 00:36:36.640270 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-02-28 00:36:36.640281 | orchestrator |
2026-02-28 00:36:36.640292 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-02-28 00:36:36.640303 | orchestrator | Saturday 28 February 2026 00:36:18 +0000 (0:00:01.018) 0:00:07.080 *****
2026-02-28 00:36:36.640314 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-28 00:36:36.640327 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-28 00:36:36.640337 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-28 00:36:36.640349 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-28 00:36:36.640360 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-28 00:36:36.640372 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-28 00:36:36.640383 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-28 00:36:36.640394 | orchestrator |
2026-02-28 00:36:36.640405 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-02-28 00:36:36.640416 | orchestrator | Saturday 28 February 2026 00:36:21 +0000 (0:00:03.597) 0:00:10.677
2026-02-28 00:36:36.640427 | orchestrator | changed: [testbed-manager] 2026-02-28 00:36:36.640438 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:36:36.640449 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:36:36.640460 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:36:36.640470 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:36:36.640481 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:36:36.640492 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:36:36.640503 | orchestrator | 2026-02-28 00:36:36.640514 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-02-28 00:36:36.640525 | orchestrator | Saturday 28 February 2026 00:36:23 +0000 (0:00:01.654) 0:00:12.332 ***** 2026-02-28 00:36:36.640536 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-28 00:36:36.640547 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-28 00:36:36.640557 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 00:36:36.640568 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-28 00:36:36.640579 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-28 00:36:36.640590 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-28 00:36:36.640609 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-28 00:36:36.640620 | orchestrator | 2026-02-28 00:36:36.640631 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-02-28 00:36:36.640642 | orchestrator | Saturday 28 February 2026 00:36:25 +0000 (0:00:01.880) 0:00:14.212 ***** 2026-02-28 00:36:36.640653 | orchestrator | ok: [testbed-manager] 2026-02-28 00:36:36.640664 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:36:36.640675 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:36:36.640686 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:36:36.640697 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:36:36.640708 | orchestrator | ok: 
[testbed-node-4] 2026-02-28 00:36:36.640719 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:36:36.640730 | orchestrator | 2026-02-28 00:36:36.640741 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-02-28 00:36:36.640790 | orchestrator | Saturday 28 February 2026 00:36:26 +0000 (0:00:01.194) 0:00:15.407 ***** 2026-02-28 00:36:36.640804 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:36:36.640815 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:36:36.640825 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:36:36.640836 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:36:36.640847 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:36:36.640858 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:36:36.640869 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:36:36.640880 | orchestrator | 2026-02-28 00:36:36.640891 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-02-28 00:36:36.640902 | orchestrator | Saturday 28 February 2026 00:36:27 +0000 (0:00:00.676) 0:00:16.083 ***** 2026-02-28 00:36:36.640913 | orchestrator | ok: [testbed-manager] 2026-02-28 00:36:36.640924 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:36:36.640935 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:36:36.640946 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:36:36.640957 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:36:36.640968 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:36:36.640978 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:36:36.640989 | orchestrator | 2026-02-28 00:36:36.641000 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-02-28 00:36:36.641011 | orchestrator | Saturday 28 February 2026 00:36:29 +0000 (0:00:02.291) 0:00:18.374 ***** 2026-02-28 00:36:36.641022 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:36:36.641034 | 
orchestrator | skipping: [testbed-node-1] 2026-02-28 00:36:36.641044 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:36:36.641055 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:36:36.641066 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:36:36.641077 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:36:36.641089 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-02-28 00:36:36.641101 | orchestrator | 2026-02-28 00:36:36.641112 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-02-28 00:36:36.641123 | orchestrator | Saturday 28 February 2026 00:36:30 +0000 (0:00:00.918) 0:00:19.293 ***** 2026-02-28 00:36:36.641134 | orchestrator | ok: [testbed-manager] 2026-02-28 00:36:36.641168 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:36:36.641186 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:36:36.641205 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:36:36.641223 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:36:36.641242 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:36:36.641259 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:36:36.641277 | orchestrator | 2026-02-28 00:36:36.641295 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-02-28 00:36:36.641314 | orchestrator | Saturday 28 February 2026 00:36:32 +0000 (0:00:01.671) 0:00:20.965 ***** 2026-02-28 00:36:36.641332 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:36:36.641365 | orchestrator | 2026-02-28 00:36:36.641385 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
2026-02-28 00:36:36.641403 | orchestrator | Saturday 28 February 2026 00:36:33 +0000 (0:00:01.311) 0:00:22.277 ***** 2026-02-28 00:36:36.641422 | orchestrator | ok: [testbed-manager] 2026-02-28 00:36:36.641441 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:36:36.641460 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:36:36.641478 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:36:36.641497 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:36:36.641515 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:36:36.641532 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:36:36.641543 | orchestrator | 2026-02-28 00:36:36.641554 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-02-28 00:36:36.641566 | orchestrator | Saturday 28 February 2026 00:36:34 +0000 (0:00:01.193) 0:00:23.470 ***** 2026-02-28 00:36:36.641584 | orchestrator | ok: [testbed-manager] 2026-02-28 00:36:36.641595 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:36:36.641606 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:36:36.641617 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:36:36.641627 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:36:36.641638 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:36:36.641649 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:36:36.641659 | orchestrator | 2026-02-28 00:36:36.641671 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-02-28 00:36:36.641682 | orchestrator | Saturday 28 February 2026 00:36:35 +0000 (0:00:00.652) 0:00:24.123 ***** 2026-02-28 00:36:36.641693 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-02-28 00:36:36.641704 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-02-28 00:36:36.641714 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-02-28 00:36:36.641725 | orchestrator | skipping: [testbed-node-2] 
=> (item=/etc/netplan/01-osism.yaml)  2026-02-28 00:36:36.641736 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-28 00:36:36.641747 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-02-28 00:36:36.641758 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-28 00:36:36.641769 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-02-28 00:36:36.641780 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-28 00:36:36.641791 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-28 00:36:36.641801 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-28 00:36:36.641812 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-28 00:36:36.641823 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-02-28 00:36:36.641834 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-28 00:36:36.641845 | orchestrator | 2026-02-28 00:36:36.641866 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-02-28 00:36:52.954552 | orchestrator | Saturday 28 February 2026 00:36:36 +0000 (0:00:01.287) 0:00:25.411 ***** 2026-02-28 00:36:52.954655 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:36:52.954671 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:36:52.954684 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:36:52.954695 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:36:52.954706 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:36:52.954717 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:36:52.954728 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:36:52.954739 | orchestrator | 2026-02-28 
00:36:52.954751 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-02-28 00:36:52.954789 | orchestrator | Saturday 28 February 2026 00:36:37 +0000 (0:00:00.638) 0:00:26.049 ***** 2026-02-28 00:36:52.954802 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-0, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4 2026-02-28 00:36:52.954816 | orchestrator | 2026-02-28 00:36:52.954827 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-02-28 00:36:52.954838 | orchestrator | Saturday 28 February 2026 00:36:42 +0000 (0:00:04.744) 0:00:30.793 ***** 2026-02-28 00:36:52.954851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:52.954863 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:52.954876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:52.954887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 
2026-02-28 00:36:52.954910 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:52.954922 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:52.954944 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:52.954955 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:52.954966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:52.954978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:52.954989 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:52.955018 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:52.955037 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:52.955049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:52.955060 | orchestrator | 2026-02-28 00:36:52.955071 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-02-28 00:36:52.955082 | orchestrator | Saturday 28 February 2026 00:36:47 +0000 (0:00:05.615) 0:00:36.409 ***** 2026-02-28 00:36:52.955095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:52.955108 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:52.955121 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:52.955134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:52.955173 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:52.955187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:52.955205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:52.955218 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:52.955231 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 
23}}) 2026-02-28 00:36:52.955245 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:52.955257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:52.955277 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:52.955302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:37:06.542435 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:37:06.542540 | orchestrator | 2026-02-28 00:37:06.542558 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-02-28 00:37:06.542570 | orchestrator | Saturday 28 February 2026 00:36:53 +0000 (0:00:05.443) 0:00:41.852 ***** 2026-02-28 00:37:06.542583 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:37:06.542595 | orchestrator | 2026-02-28 00:37:06.542607 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-02-28 00:37:06.542619 | orchestrator | Saturday 28 February 2026 00:36:54 +0000 (0:00:01.273) 0:00:43.125 ***** 2026-02-28 00:37:06.542630 | orchestrator | ok: [testbed-manager] 2026-02-28 00:37:06.542643 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:37:06.542654 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:37:06.542664 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:37:06.542692 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:37:06.542703 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:37:06.542714 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:37:06.542725 | orchestrator | 2026-02-28 00:37:06.542737 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-02-28 00:37:06.542748 | orchestrator | Saturday 28 February 2026 00:36:55 +0000 (0:00:01.194) 0:00:44.320 ***** 2026-02-28 00:37:06.542759 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-28 00:37:06.542771 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-28 00:37:06.542782 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-28 00:37:06.542793 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-28 00:37:06.542804 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:37:06.542816 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-28 00:37:06.542826 | orchestrator | skipping: [testbed-node-0] 
=> (item=/etc/systemd/network/30-vxlan0.network)  2026-02-28 00:37:06.542837 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-28 00:37:06.542848 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-28 00:37:06.542859 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:37:06.542870 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-28 00:37:06.542881 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-28 00:37:06.542892 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-28 00:37:06.542903 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-28 00:37:06.542914 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:37:06.542968 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-28 00:37:06.542985 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-28 00:37:06.543004 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-28 00:37:06.543023 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-28 00:37:06.543042 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:37:06.543063 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-28 00:37:06.543084 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-28 00:37:06.543103 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-28 00:37:06.543117 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-28 00:37:06.543128 | orchestrator | 
skipping: [testbed-node-3] 2026-02-28 00:37:06.543140 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-28 00:37:06.543177 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-28 00:37:06.543189 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-28 00:37:06.543200 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-28 00:37:06.543211 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:37:06.543222 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-28 00:37:06.543233 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-28 00:37:06.543244 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-28 00:37:06.543255 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-28 00:37:06.543266 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:37:06.543276 | orchestrator | 2026-02-28 00:37:06.543287 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-02-28 00:37:06.543322 | orchestrator | Saturday 28 February 2026 00:36:56 +0000 (0:00:00.978) 0:00:45.298 ***** 2026-02-28 00:37:06.543342 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:37:06.543361 | orchestrator | 2026-02-28 00:37:06.543378 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-02-28 00:37:06.543397 | orchestrator | Saturday 28 February 2026 00:36:57 +0000 (0:00:01.282) 0:00:46.581 ***** 2026-02-28 
00:37:06.543416 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:37:06.543434 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:37:06.543453 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:37:06.543471 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:37:06.543489 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:37:06.543506 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:37:06.543529 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:37:06.543552 | orchestrator | 2026-02-28 00:37:06.543570 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-02-28 00:37:06.543588 | orchestrator | Saturday 28 February 2026 00:36:58 +0000 (0:00:00.649) 0:00:47.230 ***** 2026-02-28 00:37:06.543608 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:37:06.543627 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:37:06.543646 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:37:06.543659 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:37:06.543670 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:37:06.543682 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:37:06.543693 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:37:06.543717 | orchestrator | 2026-02-28 00:37:06.543729 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-02-28 00:37:06.543740 | orchestrator | Saturday 28 February 2026 00:36:59 +0000 (0:00:00.886) 0:00:48.117 ***** 2026-02-28 00:37:06.543751 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:37:06.543762 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:37:06.543773 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:37:06.543784 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:37:06.543795 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:37:06.543806 | orchestrator | skipping: [testbed-node-4] 2026-02-28 
00:37:06.543816 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:37:06.543827 | orchestrator | 2026-02-28 00:37:06.543838 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-02-28 00:37:06.543849 | orchestrator | Saturday 28 February 2026 00:36:59 +0000 (0:00:00.660) 0:00:48.777 ***** 2026-02-28 00:37:06.543861 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:37:06.543872 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:37:06.543883 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:37:06.543894 | orchestrator | ok: [testbed-manager] 2026-02-28 00:37:06.543905 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:37:06.543916 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:37:06.543927 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:37:06.543938 | orchestrator | 2026-02-28 00:37:06.543949 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-02-28 00:37:06.543960 | orchestrator | Saturday 28 February 2026 00:37:01 +0000 (0:00:01.758) 0:00:50.536 ***** 2026-02-28 00:37:06.543971 | orchestrator | ok: [testbed-manager] 2026-02-28 00:37:06.543982 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:37:06.543993 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:37:06.544004 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:37:06.544015 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:37:06.544025 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:37:06.544036 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:37:06.544047 | orchestrator | 2026-02-28 00:37:06.544066 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-02-28 00:37:06.544078 | orchestrator | Saturday 28 February 2026 00:37:02 +0000 (0:00:01.042) 0:00:51.578 ***** 2026-02-28 00:37:06.544089 | orchestrator | ok: [testbed-manager] 2026-02-28 00:37:06.544099 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:37:06.544110 | 
orchestrator | ok: [testbed-node-1] 2026-02-28 00:37:06.544121 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:37:06.544138 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:37:06.544182 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:37:06.544201 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:37:06.544228 | orchestrator | 2026-02-28 00:37:06.544247 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-02-28 00:37:06.544267 | orchestrator | Saturday 28 February 2026 00:37:05 +0000 (0:00:02.357) 0:00:53.936 ***** 2026-02-28 00:37:06.544285 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:37:06.544304 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:37:06.544324 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:37:06.544342 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:37:06.544363 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:37:06.544381 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:37:06.544400 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:37:06.544415 | orchestrator | 2026-02-28 00:37:06.544427 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-02-28 00:37:06.544438 | orchestrator | Saturday 28 February 2026 00:37:05 +0000 (0:00:00.820) 0:00:54.757 ***** 2026-02-28 00:37:06.544449 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:37:06.544460 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:37:06.544471 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:37:06.544482 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:37:06.544493 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:37:06.544504 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:37:06.544526 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:37:06.544537 | orchestrator | 2026-02-28 00:37:06.544548 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-28 00:37:06.544560 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-28 00:37:06.544573 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-28 00:37:06.544598 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-28 00:37:06.930084 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-28 00:37:06.930234 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-28 00:37:06.930253 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-28 00:37:06.930266 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-28 00:37:06.930277 | orchestrator | 2026-02-28 00:37:06.930289 | orchestrator | 2026-02-28 00:37:06.930300 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:37:06.930313 | orchestrator | Saturday 28 February 2026 00:37:06 +0000 (0:00:00.556) 0:00:55.313 ***** 2026-02-28 00:37:06.930324 | orchestrator | =============================================================================== 2026-02-28 00:37:06.930335 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.62s 2026-02-28 00:37:06.930346 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.44s 2026-02-28 00:37:06.930357 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.74s 2026-02-28 00:37:06.930368 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.60s 2026-02-28 00:37:06.930380 | orchestrator | 
osism.commons.network : Remove network-extra-init script ---------------- 2.36s 2026-02-28 00:37:06.930391 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.29s 2026-02-28 00:37:06.930402 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.05s 2026-02-28 00:37:06.930413 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.88s 2026-02-28 00:37:06.930424 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.81s 2026-02-28 00:37:06.930435 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.76s 2026-02-28 00:37:06.930446 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.67s 2026-02-28 00:37:06.930457 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.65s 2026-02-28 00:37:06.930468 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.31s 2026-02-28 00:37:06.930479 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.29s 2026-02-28 00:37:06.930490 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.28s 2026-02-28 00:37:06.930501 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.27s 2026-02-28 00:37:06.930512 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.22s 2026-02-28 00:37:06.930523 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.19s 2026-02-28 00:37:06.930534 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.19s 2026-02-28 00:37:06.930545 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.19s 2026-02-28 00:37:07.269793 | orchestrator | + osism apply 
wireguard 2026-02-28 00:37:19.254299 | orchestrator | 2026-02-28 00:37:19 | INFO  | Prepare task for execution of wireguard. 2026-02-28 00:37:19.327183 | orchestrator | 2026-02-28 00:37:19 | INFO  | Task cf635cd0-54e6-452e-88b8-b29d00d57abf (wireguard) was prepared for execution. 2026-02-28 00:37:19.327298 | orchestrator | 2026-02-28 00:37:19 | INFO  | It takes a moment until task cf635cd0-54e6-452e-88b8-b29d00d57abf (wireguard) has been started and output is visible here. 2026-02-28 00:37:39.123825 | orchestrator | 2026-02-28 00:37:39.123961 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-02-28 00:37:39.123979 | orchestrator | 2026-02-28 00:37:39.123992 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-02-28 00:37:39.124004 | orchestrator | Saturday 28 February 2026 00:37:23 +0000 (0:00:00.202) 0:00:00.202 ***** 2026-02-28 00:37:39.124016 | orchestrator | ok: [testbed-manager] 2026-02-28 00:37:39.124028 | orchestrator | 2026-02-28 00:37:39.124053 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-02-28 00:37:39.124065 | orchestrator | Saturday 28 February 2026 00:37:24 +0000 (0:00:01.263) 0:00:01.466 ***** 2026-02-28 00:37:39.124076 | orchestrator | changed: [testbed-manager] 2026-02-28 00:37:39.124088 | orchestrator | 2026-02-28 00:37:39.124099 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-02-28 00:37:39.124110 | orchestrator | Saturday 28 February 2026 00:37:31 +0000 (0:00:06.604) 0:00:08.071 ***** 2026-02-28 00:37:39.124121 | orchestrator | changed: [testbed-manager] 2026-02-28 00:37:39.124132 | orchestrator | 2026-02-28 00:37:39.124143 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-02-28 00:37:39.124199 | orchestrator | Saturday 28 February 2026 00:37:31 +0000 (0:00:00.552) 0:00:08.623 ***** 
2026-02-28 00:37:39.124212 | orchestrator | changed: [testbed-manager] 2026-02-28 00:37:39.124223 | orchestrator | 2026-02-28 00:37:39.124234 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-02-28 00:37:39.124245 | orchestrator | Saturday 28 February 2026 00:37:32 +0000 (0:00:00.426) 0:00:09.050 ***** 2026-02-28 00:37:39.124256 | orchestrator | ok: [testbed-manager] 2026-02-28 00:37:39.124267 | orchestrator | 2026-02-28 00:37:39.124278 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-02-28 00:37:39.124289 | orchestrator | Saturday 28 February 2026 00:37:33 +0000 (0:00:00.670) 0:00:09.721 ***** 2026-02-28 00:37:39.124300 | orchestrator | ok: [testbed-manager] 2026-02-28 00:37:39.124311 | orchestrator | 2026-02-28 00:37:39.124321 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-02-28 00:37:39.124332 | orchestrator | Saturday 28 February 2026 00:37:33 +0000 (0:00:00.427) 0:00:10.148 ***** 2026-02-28 00:37:39.124343 | orchestrator | ok: [testbed-manager] 2026-02-28 00:37:39.124354 | orchestrator | 2026-02-28 00:37:39.124366 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-02-28 00:37:39.124379 | orchestrator | Saturday 28 February 2026 00:37:33 +0000 (0:00:00.412) 0:00:10.561 ***** 2026-02-28 00:37:39.124392 | orchestrator | changed: [testbed-manager] 2026-02-28 00:37:39.124404 | orchestrator | 2026-02-28 00:37:39.124419 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-02-28 00:37:39.124437 | orchestrator | Saturday 28 February 2026 00:37:35 +0000 (0:00:01.218) 0:00:11.779 ***** 2026-02-28 00:37:39.124456 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-28 00:37:39.124477 | orchestrator | changed: [testbed-manager] 2026-02-28 00:37:39.124490 | orchestrator | 2026-02-28 00:37:39.124503 | 
orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-02-28 00:37:39.124516 | orchestrator | Saturday 28 February 2026 00:37:36 +0000 (0:00:00.990) 0:00:12.770 ***** 2026-02-28 00:37:39.124528 | orchestrator | changed: [testbed-manager] 2026-02-28 00:37:39.124541 | orchestrator | 2026-02-28 00:37:39.124553 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-02-28 00:37:39.124595 | orchestrator | Saturday 28 February 2026 00:37:37 +0000 (0:00:01.685) 0:00:14.455 ***** 2026-02-28 00:37:39.124607 | orchestrator | changed: [testbed-manager] 2026-02-28 00:37:39.124617 | orchestrator | 2026-02-28 00:37:39.124628 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:37:39.124640 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:37:39.124652 | orchestrator | 2026-02-28 00:37:39.124663 | orchestrator | 2026-02-28 00:37:39.124674 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:37:39.124685 | orchestrator | Saturday 28 February 2026 00:37:38 +0000 (0:00:00.973) 0:00:15.429 ***** 2026-02-28 00:37:39.124695 | orchestrator | =============================================================================== 2026-02-28 00:37:39.124706 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.61s 2026-02-28 00:37:39.124717 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.69s 2026-02-28 00:37:39.124728 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.26s 2026-02-28 00:37:39.124739 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.22s 2026-02-28 00:37:39.124750 | orchestrator | osism.services.wireguard : Copy client configuration files 
-------------- 0.99s 2026-02-28 00:37:39.124779 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.97s 2026-02-28 00:37:39.124791 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.67s 2026-02-28 00:37:39.124802 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s 2026-02-28 00:37:39.124818 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.43s 2026-02-28 00:37:39.124830 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s 2026-02-28 00:37:39.124841 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s 2026-02-28 00:37:39.508552 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-02-28 00:37:39.544962 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-02-28 00:37:39.545085 | orchestrator | Dload Upload Total Spent Left Speed 2026-02-28 00:37:39.622979 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 193 0 --:--:-- --:--:-- --:--:-- 194 2026-02-28 00:37:39.633570 | orchestrator | + osism apply --environment custom workarounds 2026-02-28 00:37:41.677550 | orchestrator | 2026-02-28 00:37:41 | INFO  | Trying to run play workarounds in environment custom 2026-02-28 00:37:51.799017 | orchestrator | 2026-02-28 00:37:51 | INFO  | Prepare task for execution of workarounds. 2026-02-28 00:37:51.886013 | orchestrator | 2026-02-28 00:37:51 | INFO  | Task b6fc1f99-eec0-4176-95f8-289f595b0c74 (workarounds) was prepared for execution. 2026-02-28 00:37:51.886251 | orchestrator | 2026-02-28 00:37:51 | INFO  | It takes a moment until task b6fc1f99-eec0-4176-95f8-289f595b0c74 (workarounds) has been started and output is visible here. 
2026-02-28 00:38:17.155094 | orchestrator | 2026-02-28 00:38:17.155253 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 00:38:17.155270 | orchestrator | 2026-02-28 00:38:17.155282 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-02-28 00:38:17.155294 | orchestrator | Saturday 28 February 2026 00:37:55 +0000 (0:00:00.130) 0:00:00.130 ***** 2026-02-28 00:38:17.155306 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-02-28 00:38:17.155318 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-02-28 00:38:17.155329 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-02-28 00:38:17.155340 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-02-28 00:38:17.155380 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-02-28 00:38:17.155392 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-02-28 00:38:17.155404 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-02-28 00:38:17.155414 | orchestrator | 2026-02-28 00:38:17.155425 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-02-28 00:38:17.155436 | orchestrator | 2026-02-28 00:38:17.155447 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-02-28 00:38:17.155458 | orchestrator | Saturday 28 February 2026 00:37:56 +0000 (0:00:00.724) 0:00:00.855 ***** 2026-02-28 00:38:17.155469 | orchestrator | ok: [testbed-manager] 2026-02-28 00:38:17.155481 | orchestrator | 2026-02-28 00:38:17.155492 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-02-28 00:38:17.155502 | orchestrator | 2026-02-28 00:38:17.155513 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-02-28 00:38:17.155524 | orchestrator | Saturday 28 February 2026 00:37:58 +0000 (0:00:02.418) 0:00:03.273 ***** 2026-02-28 00:38:17.155534 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:38:17.155545 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:38:17.155556 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:38:17.155566 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:38:17.155577 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:38:17.155587 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:38:17.155598 | orchestrator | 2026-02-28 00:38:17.155609 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-02-28 00:38:17.155619 | orchestrator | 2026-02-28 00:38:17.155630 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-02-28 00:38:17.155643 | orchestrator | Saturday 28 February 2026 00:38:00 +0000 (0:00:01.923) 0:00:05.196 ***** 2026-02-28 00:38:17.155655 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-28 00:38:17.155669 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-28 00:38:17.155681 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-28 00:38:17.155694 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-28 00:38:17.155706 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-28 00:38:17.155719 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-28 00:38:17.155731 | orchestrator | 2026-02-28 00:38:17.155743 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-02-28 00:38:17.155756 | orchestrator | Saturday 28 February 2026 00:38:02 +0000 (0:00:01.526) 0:00:06.722 ***** 2026-02-28 00:38:17.155768 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:38:17.155780 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:38:17.155792 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:38:17.155805 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:38:17.155817 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:38:17.155829 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:38:17.155841 | orchestrator | 2026-02-28 00:38:17.155853 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-02-28 00:38:17.155865 | orchestrator | Saturday 28 February 2026 00:38:06 +0000 (0:00:03.854) 0:00:10.577 ***** 2026-02-28 00:38:17.155893 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:38:17.155905 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:38:17.155918 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:38:17.155930 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:38:17.155942 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:38:17.155955 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:38:17.155975 | orchestrator | 2026-02-28 00:38:17.155988 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-02-28 00:38:17.155998 | orchestrator | 2026-02-28 00:38:17.156009 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-02-28 00:38:17.156020 | orchestrator | Saturday 28 February 2026 00:38:06 +0000 (0:00:00.707) 0:00:11.284 ***** 2026-02-28 00:38:17.156031 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:38:17.156042 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:38:17.156052 | orchestrator | changed: [testbed-node-5] 2026-02-28 
00:38:17.156063 | orchestrator | changed: [testbed-manager] 2026-02-28 00:38:17.156073 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:38:17.156084 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:38:17.156094 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:38:17.156105 | orchestrator | 2026-02-28 00:38:17.156116 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-02-28 00:38:17.156127 | orchestrator | Saturday 28 February 2026 00:38:08 +0000 (0:00:01.657) 0:00:12.941 ***** 2026-02-28 00:38:17.156138 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:38:17.156148 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:38:17.156184 | orchestrator | changed: [testbed-manager] 2026-02-28 00:38:17.156195 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:38:17.156206 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:38:17.156217 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:38:17.156246 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:38:17.156258 | orchestrator | 2026-02-28 00:38:17.156269 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-02-28 00:38:17.156280 | orchestrator | Saturday 28 February 2026 00:38:10 +0000 (0:00:01.680) 0:00:14.622 ***** 2026-02-28 00:38:17.156290 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:38:17.156301 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:38:17.156312 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:38:17.156322 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:38:17.156333 | orchestrator | ok: [testbed-manager] 2026-02-28 00:38:17.156344 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:38:17.156354 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:38:17.156365 | orchestrator | 2026-02-28 00:38:17.156376 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-02-28 00:38:17.156387 | orchestrator 
| Saturday 28 February 2026 00:38:11 +0000 (0:00:01.530) 0:00:16.152 ***** 2026-02-28 00:38:17.156398 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:38:17.156409 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:38:17.156419 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:38:17.156430 | orchestrator | changed: [testbed-manager] 2026-02-28 00:38:17.156441 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:38:17.156451 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:38:17.156462 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:38:17.156473 | orchestrator | 2026-02-28 00:38:17.156483 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-02-28 00:38:17.156494 | orchestrator | Saturday 28 February 2026 00:38:13 +0000 (0:00:01.858) 0:00:18.011 ***** 2026-02-28 00:38:17.156505 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:38:17.156515 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:38:17.156526 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:38:17.156536 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:38:17.156547 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:38:17.156557 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:38:17.156568 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:38:17.156579 | orchestrator | 2026-02-28 00:38:17.156589 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-02-28 00:38:17.156600 | orchestrator | 2026-02-28 00:38:17.156611 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-02-28 00:38:17.156622 | orchestrator | Saturday 28 February 2026 00:38:14 +0000 (0:00:00.632) 0:00:18.643 ***** 2026-02-28 00:38:17.156632 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:38:17.156651 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:38:17.156661 | orchestrator | ok: 
[testbed-manager] 2026-02-28 00:38:17.156672 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:38:17.156683 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:38:17.156694 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:38:17.156704 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:38:17.156715 | orchestrator | 2026-02-28 00:38:17.156726 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:38:17.156738 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:38:17.156750 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:38:17.156761 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:38:17.156772 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:38:17.156783 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:38:17.156794 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:38:17.156804 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:38:17.156815 | orchestrator | 2026-02-28 00:38:17.156826 | orchestrator | 2026-02-28 00:38:17.156842 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:38:17.156854 | orchestrator | Saturday 28 February 2026 00:38:17 +0000 (0:00:02.878) 0:00:21.522 ***** 2026-02-28 00:38:17.156864 | orchestrator | =============================================================================== 2026-02-28 00:38:17.156875 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.85s 2026-02-28 00:38:17.156886 | orchestrator | 
Install python3-docker -------------------------------------------------- 2.88s 2026-02-28 00:38:17.156897 | orchestrator | Apply netplan configuration --------------------------------------------- 2.42s 2026-02-28 00:38:17.156908 | orchestrator | Apply netplan configuration --------------------------------------------- 1.92s 2026-02-28 00:38:17.156919 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.86s 2026-02-28 00:38:17.156930 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.68s 2026-02-28 00:38:17.156940 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.66s 2026-02-28 00:38:17.156951 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.53s 2026-02-28 00:38:17.156962 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.53s 2026-02-28 00:38:17.156973 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.72s 2026-02-28 00:38:17.156984 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.71s 2026-02-28 00:38:17.157001 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.63s 2026-02-28 00:38:17.815116 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-02-28 00:38:29.944536 | orchestrator | 2026-02-28 00:38:29 | INFO  | Prepare task for execution of reboot. 2026-02-28 00:38:30.011410 | orchestrator | 2026-02-28 00:38:30 | INFO  | Task b0e7271f-a342-45dc-ac48-2e2fc0878a09 (reboot) was prepared for execution. 2026-02-28 00:38:30.011528 | orchestrator | 2026-02-28 00:38:30 | INFO  | It takes a moment until task b0e7271f-a342-45dc-ac48-2e2fc0878a09 (reboot) has been started and output is visible here. 
2026-02-28 00:38:40.261954 | orchestrator | 2026-02-28 00:38:40.262002 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-28 00:38:40.262008 | orchestrator | 2026-02-28 00:38:40.262037 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-28 00:38:40.262042 | orchestrator | Saturday 28 February 2026 00:38:34 +0000 (0:00:00.219) 0:00:00.220 ***** 2026-02-28 00:38:40.262046 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:38:40.262050 | orchestrator | 2026-02-28 00:38:40.262054 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-28 00:38:40.262058 | orchestrator | Saturday 28 February 2026 00:38:34 +0000 (0:00:00.103) 0:00:00.323 ***** 2026-02-28 00:38:40.262062 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:38:40.262066 | orchestrator | 2026-02-28 00:38:40.262070 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-28 00:38:40.262074 | orchestrator | Saturday 28 February 2026 00:38:35 +0000 (0:00:00.951) 0:00:01.274 ***** 2026-02-28 00:38:40.262078 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:38:40.262082 | orchestrator | 2026-02-28 00:38:40.262085 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-28 00:38:40.262089 | orchestrator | 2026-02-28 00:38:40.262093 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-28 00:38:40.262097 | orchestrator | Saturday 28 February 2026 00:38:35 +0000 (0:00:00.116) 0:00:01.391 ***** 2026-02-28 00:38:40.262101 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:38:40.262105 | orchestrator | 2026-02-28 00:38:40.262108 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-28 00:38:40.262112 | orchestrator | Saturday 28 February 
2026 00:38:35 +0000 (0:00:00.113) 0:00:01.505 ***** 2026-02-28 00:38:40.262116 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:38:40.262120 | orchestrator | 2026-02-28 00:38:40.262124 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-28 00:38:40.262128 | orchestrator | Saturday 28 February 2026 00:38:36 +0000 (0:00:00.670) 0:00:02.176 ***** 2026-02-28 00:38:40.262131 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:38:40.262135 | orchestrator | 2026-02-28 00:38:40.262148 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-28 00:38:40.262152 | orchestrator | 2026-02-28 00:38:40.262156 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-28 00:38:40.262160 | orchestrator | Saturday 28 February 2026 00:38:36 +0000 (0:00:00.112) 0:00:02.289 ***** 2026-02-28 00:38:40.262164 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:38:40.262168 | orchestrator | 2026-02-28 00:38:40.262172 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-28 00:38:40.262175 | orchestrator | Saturday 28 February 2026 00:38:36 +0000 (0:00:00.236) 0:00:02.525 ***** 2026-02-28 00:38:40.262179 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:38:40.262183 | orchestrator | 2026-02-28 00:38:40.262187 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-28 00:38:40.262191 | orchestrator | Saturday 28 February 2026 00:38:37 +0000 (0:00:00.698) 0:00:03.224 ***** 2026-02-28 00:38:40.262194 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:38:40.262198 | orchestrator | 2026-02-28 00:38:40.262202 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-28 00:38:40.262206 | orchestrator | 2026-02-28 00:38:40.262210 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2026-02-28 00:38:40.262213 | orchestrator | Saturday 28 February 2026 00:38:37 +0000 (0:00:00.112) 0:00:03.336 ***** 2026-02-28 00:38:40.262217 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:38:40.262221 | orchestrator | 2026-02-28 00:38:40.262225 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-28 00:38:40.262237 | orchestrator | Saturday 28 February 2026 00:38:37 +0000 (0:00:00.119) 0:00:03.455 ***** 2026-02-28 00:38:40.262241 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:38:40.262256 | orchestrator | 2026-02-28 00:38:40.262260 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-28 00:38:40.262264 | orchestrator | Saturday 28 February 2026 00:38:38 +0000 (0:00:00.691) 0:00:04.146 ***** 2026-02-28 00:38:40.262268 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:38:40.262272 | orchestrator | 2026-02-28 00:38:40.262276 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-28 00:38:40.262280 | orchestrator | 2026-02-28 00:38:40.262284 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-28 00:38:40.262287 | orchestrator | Saturday 28 February 2026 00:38:38 +0000 (0:00:00.132) 0:00:04.279 ***** 2026-02-28 00:38:40.262291 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:38:40.262295 | orchestrator | 2026-02-28 00:38:40.262299 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-28 00:38:40.262303 | orchestrator | Saturday 28 February 2026 00:38:38 +0000 (0:00:00.100) 0:00:04.379 ***** 2026-02-28 00:38:40.262306 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:38:40.262310 | orchestrator | 2026-02-28 00:38:40.262314 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-02-28 00:38:40.262318 | orchestrator | Saturday 28 February 2026 00:38:39 +0000 (0:00:00.572) 0:00:04.951 ***** 2026-02-28 00:38:40.262322 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:38:40.262325 | orchestrator | 2026-02-28 00:38:40.262329 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-28 00:38:40.262333 | orchestrator | 2026-02-28 00:38:40.262337 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-28 00:38:40.262341 | orchestrator | Saturday 28 February 2026 00:38:39 +0000 (0:00:00.100) 0:00:05.052 ***** 2026-02-28 00:38:40.262345 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:38:40.262348 | orchestrator | 2026-02-28 00:38:40.262352 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-28 00:38:40.262356 | orchestrator | Saturday 28 February 2026 00:38:39 +0000 (0:00:00.094) 0:00:05.147 ***** 2026-02-28 00:38:40.262360 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:38:40.262364 | orchestrator | 2026-02-28 00:38:40.262368 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-28 00:38:40.262371 | orchestrator | Saturday 28 February 2026 00:38:39 +0000 (0:00:00.607) 0:00:05.754 ***** 2026-02-28 00:38:40.262382 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:38:40.262386 | orchestrator | 2026-02-28 00:38:40.262389 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:38:40.262394 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:38:40.262398 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:38:40.262402 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-02-28 00:38:40.262406 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:38:40.262410 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:38:40.262414 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:38:40.262418 | orchestrator | 2026-02-28 00:38:40.262421 | orchestrator | 2026-02-28 00:38:40.262425 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:38:40.262429 | orchestrator | Saturday 28 February 2026 00:38:40 +0000 (0:00:00.034) 0:00:05.788 ***** 2026-02-28 00:38:40.262433 | orchestrator | =============================================================================== 2026-02-28 00:38:40.262440 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.19s 2026-02-28 00:38:40.262444 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.77s 2026-02-28 00:38:40.262447 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.61s 2026-02-28 00:38:40.479001 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-02-28 00:38:52.380599 | orchestrator | 2026-02-28 00:38:52 | INFO  | Prepare task for execution of wait-for-connection. 2026-02-28 00:38:52.461187 | orchestrator | 2026-02-28 00:38:52 | INFO  | Task bcb8e041-5222-45c9-85a7-981950145ffc (wait-for-connection) was prepared for execution. 2026-02-28 00:38:52.461287 | orchestrator | 2026-02-28 00:38:52 | INFO  | It takes a moment until task bcb8e041-5222-45c9-85a7-981950145ffc (wait-for-connection) has been started and output is visible here. 
2026-02-28 00:39:09.207971 | orchestrator | 2026-02-28 00:39:09.208150 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-02-28 00:39:09.208179 | orchestrator | 2026-02-28 00:39:09.208198 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-02-28 00:39:09.208217 | orchestrator | Saturday 28 February 2026 00:38:57 +0000 (0:00:00.266) 0:00:00.266 ***** 2026-02-28 00:39:09.208237 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:39:09.208258 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:39:09.208277 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:39:09.208297 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:39:09.208342 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:39:09.208363 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:39:09.208384 | orchestrator | 2026-02-28 00:39:09.208405 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:39:09.208426 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:39:09.208446 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:39:09.208468 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:39:09.208488 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:39:09.208509 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:39:09.208530 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:39:09.208551 | orchestrator | 2026-02-28 00:39:09.208573 | orchestrator | 2026-02-28 00:39:09.208593 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-28 00:39:09.208614 | orchestrator | Saturday 28 February 2026 00:39:08 +0000 (0:00:11.589) 0:00:11.856 ***** 2026-02-28 00:39:09.208634 | orchestrator | =============================================================================== 2026-02-28 00:39:09.208656 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.59s 2026-02-28 00:39:09.558377 | orchestrator | + osism apply hddtemp 2026-02-28 00:39:21.690358 | orchestrator | 2026-02-28 00:39:21 | INFO  | Prepare task for execution of hddtemp. 2026-02-28 00:39:21.766690 | orchestrator | 2026-02-28 00:39:21 | INFO  | Task c96738d1-4d4d-410a-a44f-f3794396a50d (hddtemp) was prepared for execution. 2026-02-28 00:39:21.766791 | orchestrator | 2026-02-28 00:39:21 | INFO  | It takes a moment until task c96738d1-4d4d-410a-a44f-f3794396a50d (hddtemp) has been started and output is visible here. 2026-02-28 00:39:49.690210 | orchestrator | 2026-02-28 00:39:49.690302 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-02-28 00:39:49.690341 | orchestrator | 2026-02-28 00:39:49.690355 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-02-28 00:39:49.690366 | orchestrator | Saturday 28 February 2026 00:39:26 +0000 (0:00:00.250) 0:00:00.250 ***** 2026-02-28 00:39:49.690378 | orchestrator | ok: [testbed-manager] 2026-02-28 00:39:49.690390 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:39:49.690402 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:39:49.690414 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:39:49.690425 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:39:49.690436 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:39:49.690447 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:39:49.690458 | orchestrator | 2026-02-28 00:39:49.690470 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-02-28 00:39:49.690481 | orchestrator | Saturday 28 February 2026 00:39:26 +0000 (0:00:00.725) 0:00:00.976 ***** 2026-02-28 00:39:49.690494 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:39:49.690507 | orchestrator | 2026-02-28 00:39:49.690519 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-02-28 00:39:49.690531 | orchestrator | Saturday 28 February 2026 00:39:27 +0000 (0:00:01.171) 0:00:02.148 ***** 2026-02-28 00:39:49.690542 | orchestrator | ok: [testbed-manager] 2026-02-28 00:39:49.690553 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:39:49.690565 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:39:49.690576 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:39:49.690587 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:39:49.690598 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:39:49.690609 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:39:49.690621 | orchestrator | 2026-02-28 00:39:49.690632 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-02-28 00:39:49.690643 | orchestrator | Saturday 28 February 2026 00:39:29 +0000 (0:00:01.896) 0:00:04.044 ***** 2026-02-28 00:39:49.690655 | orchestrator | changed: [testbed-manager] 2026-02-28 00:39:49.690667 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:39:49.690678 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:39:49.690689 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:39:49.690700 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:39:49.690712 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:39:49.690723 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:39:49.690734 | 
orchestrator | 2026-02-28 00:39:49.690746 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-02-28 00:39:49.690757 | orchestrator | Saturday 28 February 2026 00:39:31 +0000 (0:00:01.318) 0:00:05.362 ***** 2026-02-28 00:39:49.690768 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:39:49.690779 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:39:49.690791 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:39:49.690802 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:39:49.690813 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:39:49.690845 | orchestrator | ok: [testbed-manager] 2026-02-28 00:39:49.690856 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:39:49.690867 | orchestrator | 2026-02-28 00:39:49.690878 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-02-28 00:39:49.690890 | orchestrator | Saturday 28 February 2026 00:39:33 +0000 (0:00:01.865) 0:00:07.228 ***** 2026-02-28 00:39:49.690901 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:39:49.690912 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:39:49.690935 | orchestrator | changed: [testbed-manager] 2026-02-28 00:39:49.690946 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:39:49.690958 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:39:49.690969 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:39:49.690979 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:39:49.690990 | orchestrator | 2026-02-28 00:39:49.691002 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-02-28 00:39:49.691020 | orchestrator | Saturday 28 February 2026 00:39:33 +0000 (0:00:00.878) 0:00:08.106 ***** 2026-02-28 00:39:49.691032 | orchestrator | changed: [testbed-manager] 2026-02-28 00:39:49.691044 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:39:49.691055 | orchestrator | changed: [testbed-node-0] 
2026-02-28 00:39:49.691067 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:39:49.691078 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:39:49.691089 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:39:49.691123 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:39:49.691134 | orchestrator | 2026-02-28 00:39:49.691146 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-02-28 00:39:49.691158 | orchestrator | Saturday 28 February 2026 00:39:46 +0000 (0:00:12.818) 0:00:20.924 ***** 2026-02-28 00:39:49.691169 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:39:49.691181 | orchestrator | 2026-02-28 00:39:49.691193 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-02-28 00:39:49.691204 | orchestrator | Saturday 28 February 2026 00:39:47 +0000 (0:00:01.080) 0:00:22.005 ***** 2026-02-28 00:39:49.691215 | orchestrator | changed: [testbed-manager] 2026-02-28 00:39:49.691226 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:39:49.691238 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:39:49.691249 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:39:49.691260 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:39:49.691271 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:39:49.691282 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:39:49.691294 | orchestrator | 2026-02-28 00:39:49.691308 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:39:49.691328 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:39:49.691374 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:39:49.691400 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:39:49.691419 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:39:49.691437 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:39:49.691455 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:39:49.691474 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:39:49.691493 | orchestrator | 2026-02-28 00:39:49.691513 | orchestrator | 2026-02-28 00:39:49.691532 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:39:49.691552 | orchestrator | Saturday 28 February 2026 00:39:49 +0000 (0:00:01.671) 0:00:23.676 ***** 2026-02-28 00:39:49.691572 | orchestrator | =============================================================================== 2026-02-28 00:39:49.691585 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.82s 2026-02-28 00:39:49.691596 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.90s 2026-02-28 00:39:49.691607 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.87s 2026-02-28 00:39:49.691629 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.67s 2026-02-28 00:39:49.691640 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.32s 2026-02-28 00:39:49.691651 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.17s 2026-02-28 00:39:49.691662 | orchestrator | osism.services.hddtemp : Include 
distribution specific service tasks ---- 1.08s 2026-02-28 00:39:49.691673 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.88s 2026-02-28 00:39:49.691684 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.73s 2026-02-28 00:39:49.914904 | orchestrator | ++ semver latest 7.1.1 2026-02-28 00:39:49.970968 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-28 00:39:49.971045 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-28 00:39:49.971060 | orchestrator | + sudo systemctl restart manager.service 2026-02-28 00:40:03.072081 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-28 00:40:03.072212 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-28 00:40:03.072228 | orchestrator | + local max_attempts=60 2026-02-28 00:40:03.072242 | orchestrator | + local name=ceph-ansible 2026-02-28 00:40:03.072253 | orchestrator | + local attempt_num=1 2026-02-28 00:40:03.072265 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:03.106904 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:03.107004 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:40:03.107022 | orchestrator | + sleep 5 2026-02-28 00:40:08.111755 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:08.188485 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:08.188555 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:40:08.188564 | orchestrator | + sleep 5 2026-02-28 00:40:13.191340 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:13.226639 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:13.226713 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:40:13.226726 | orchestrator | + sleep 5 2026-02-28 00:40:18.230232 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:18.257759 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:18.257845 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:40:18.257859 | orchestrator | + sleep 5 2026-02-28 00:40:23.262116 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:23.303170 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:23.303259 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:40:23.303273 | orchestrator | + sleep 5 2026-02-28 00:40:28.307007 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:28.346355 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:28.346445 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:40:28.346459 | orchestrator | + sleep 5 2026-02-28 00:40:33.351785 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:33.389922 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:33.390140 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:40:33.390160 | orchestrator | + sleep 5 2026-02-28 00:40:38.394746 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:38.455919 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:38.456011 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:40:38.456022 | orchestrator | + sleep 5 2026-02-28 00:40:43.461344 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:43.486350 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:43.486436 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:40:43.486450 | orchestrator | + sleep 5 2026-02-28 00:40:48.488448 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:48.533453 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:48.533571 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:40:48.533589 | orchestrator | + sleep 5 2026-02-28 00:40:53.537560 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:53.578732 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:53.578823 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:40:53.578868 | orchestrator | + sleep 5 2026-02-28 00:40:58.584824 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:58.621400 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:58.621493 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:40:58.621507 | orchestrator | + sleep 5 2026-02-28 00:41:03.626456 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:41:03.661780 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-28 00:41:03.661905 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:41:03.661931 | orchestrator | + sleep 5 2026-02-28 00:41:08.666114 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:41:08.702531 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:41:08.702609 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-28 00:41:08.702623 | orchestrator | + local max_attempts=60 2026-02-28 00:41:08.702635 | orchestrator | + local name=kolla-ansible 2026-02-28 00:41:08.702646 | orchestrator | + local attempt_num=1 2026-02-28 00:41:08.703511 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-28 00:41:08.732341 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:41:08.732436 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-02-28 00:41:08.732459 | orchestrator | + local max_attempts=60 2026-02-28 00:41:08.732579 | orchestrator | + local name=osism-ansible 2026-02-28 00:41:08.732593 | orchestrator | + local attempt_num=1 2026-02-28 00:41:08.732614 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-28 00:41:08.761814 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:41:08.761898 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-28 00:41:08.762159 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-28 00:41:08.913050 | orchestrator | ARA in ceph-ansible already disabled. 2026-02-28 00:41:09.050740 | orchestrator | ARA in kolla-ansible already disabled. 2026-02-28 00:41:09.207653 | orchestrator | ARA in osism-ansible already disabled. 2026-02-28 00:41:09.368259 | orchestrator | ARA in osism-kubernetes already disabled. 2026-02-28 00:41:09.369213 | orchestrator | + osism apply gather-facts 2026-02-28 00:41:21.274610 | orchestrator | 2026-02-28 00:41:21 | INFO  | Prepare task for execution of gather-facts. 2026-02-28 00:41:21.336120 | orchestrator | 2026-02-28 00:41:21 | INFO  | Task 61af92d0-884e-4659-a3c1-3c43c838f7eb (gather-facts) was prepared for execution. 2026-02-28 00:41:21.336200 | orchestrator | 2026-02-28 00:41:21 | INFO  | It takes a moment until task 61af92d0-884e-4659-a3c1-3c43c838f7eb (gather-facts) has been started and output is visible here. 
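The xtrace output above shows `wait_for_container_healthy` polling `docker inspect` every five seconds until the container's health status (which moves through `unhealthy` and `starting` before settling on `healthy` here) matches. A reconstruction from the trace; the actual helper lives in the testbed configuration scripts and may differ in detail, and the unqualified `docker` call is a simplification of the `/usr/bin/docker` path seen in the log:

```shell
# Reconstructed from the xtrace above: wait up to max_attempts * 5 seconds
# for a container's Docker healthcheck to report "healthy".
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
    return 0
}
```

Note that the loop treats `unhealthy` and `starting` identically: both simply trigger another sleep, so a container that restarts its healthcheck cycle (as `ceph-ansible` does here after the manager restart) is tolerated as long as it converges within the attempt budget.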
2026-02-28 00:41:33.915911 | orchestrator | 2026-02-28 00:41:33.916021 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-28 00:41:33.916037 | orchestrator | 2026-02-28 00:41:33.916050 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-28 00:41:33.916125 | orchestrator | Saturday 28 February 2026 00:41:25 +0000 (0:00:00.197) 0:00:00.197 ***** 2026-02-28 00:41:33.916137 | orchestrator | ok: [testbed-manager] 2026-02-28 00:41:33.916150 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:41:33.916161 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:41:33.916173 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:41:33.916184 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:41:33.916195 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:41:33.916206 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:41:33.916217 | orchestrator | 2026-02-28 00:41:33.916229 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-28 00:41:33.916240 | orchestrator | 2026-02-28 00:41:33.916251 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-28 00:41:33.916262 | orchestrator | Saturday 28 February 2026 00:41:33 +0000 (0:00:07.786) 0:00:07.984 ***** 2026-02-28 00:41:33.916273 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:41:33.916285 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:41:33.916297 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:41:33.916308 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:41:33.916319 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:41:33.916357 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:41:33.916370 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:41:33.916381 | orchestrator | 2026-02-28 00:41:33.916392 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-28 00:41:33.916404 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:41:33.916416 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:41:33.916429 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:41:33.916448 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:41:33.916466 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:41:33.916495 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:41:33.916515 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:41:33.916533 | orchestrator | 2026-02-28 00:41:33.916552 | orchestrator | 2026-02-28 00:41:33.916570 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:41:33.916587 | orchestrator | Saturday 28 February 2026 00:41:33 +0000 (0:00:00.515) 0:00:08.500 ***** 2026-02-28 00:41:33.916605 | orchestrator | =============================================================================== 2026-02-28 00:41:33.916646 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.79s 2026-02-28 00:41:33.916667 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2026-02-28 00:41:34.304102 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-02-28 00:41:34.315038 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-02-28 
00:41:34.328936 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-02-28 00:41:34.347379 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-02-28 00:41:34.365642 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-02-28 00:41:34.383185 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-02-28 00:41:34.399117 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-02-28 00:41:34.422002 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-02-28 00:41:34.427132 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-02-28 00:41:34.437734 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-02-28 00:41:34.447961 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-02-28 00:41:34.458415 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-02-28 00:41:34.487512 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-02-28 00:41:34.500860 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-02-28 00:41:34.524637 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-02-28 00:41:34.548744 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-02-28 00:41:34.568848 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-02-28 00:41:34.585859 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-02-28 00:41:34.603494 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-02-28 00:41:34.617809 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-02-28 00:41:34.631377 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-02-28 00:41:34.644089 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-02-28 00:41:34.654366 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-02-28 00:41:34.670252 | orchestrator | + [[ false == \t\r\u\e ]] 2026-02-28 00:41:35.032361 | orchestrator | ok: Runtime: 0:25:34.105283 2026-02-28 00:41:35.149496 | 2026-02-28 00:41:35.149643 | TASK [Deploy services] 2026-02-28 00:41:35.683163 | orchestrator | skipping: Conditional result was False 2026-02-28 00:41:35.700890 | 2026-02-28 00:41:35.701065 | TASK [Deploy in a nutshell] 2026-02-28 00:41:36.418287 | orchestrator | + set -e 2026-02-28 00:41:36.418515 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-28 00:41:36.418541 | orchestrator | ++ export INTERACTIVE=false 2026-02-28 00:41:36.418563 | orchestrator | ++ INTERACTIVE=false 2026-02-28 00:41:36.418578 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-28 00:41:36.418591 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-28 00:41:36.418605 | 
orchestrator | + source /opt/manager-vars.sh
2026-02-28 00:41:36.418649 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-28 00:41:36.418678 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-28 00:41:36.418693 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-28 00:41:36.418710 | orchestrator | ++ CEPH_VERSION=reef
2026-02-28 00:41:36.418722 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-28 00:41:36.418740 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-28 00:41:36.418752 | orchestrator | ++ export MANAGER_VERSION=latest
2026-02-28 00:41:36.418772 | orchestrator | ++ MANAGER_VERSION=latest
2026-02-28 00:41:36.418783 | orchestrator | ++ export OPENSTACK_VERSION=2025.1
2026-02-28 00:41:36.418797 | orchestrator | ++ OPENSTACK_VERSION=2025.1
2026-02-28 00:41:36.418808 | orchestrator | ++ export ARA=false
2026-02-28 00:41:36.418820 | orchestrator | ++ ARA=false
2026-02-28 00:41:36.418831 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-28 00:41:36.418843 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-28 00:41:36.418854 | orchestrator | ++ export TEMPEST=true
2026-02-28 00:41:36.418865 | orchestrator | ++ TEMPEST=true
2026-02-28 00:41:36.418875 | orchestrator | ++ export IS_ZUUL=true
2026-02-28 00:41:36.418886 | orchestrator | ++ IS_ZUUL=true
2026-02-28 00:41:36.418898 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.157
2026-02-28 00:41:36.418909 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.157
2026-02-28 00:41:36.418935 | orchestrator | ++ export EXTERNAL_API=false
2026-02-28 00:41:36.418947 | orchestrator | ++ EXTERNAL_API=false
2026-02-28 00:41:36.418958 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-28 00:41:36.418969 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-28 00:41:36.418980 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-28 00:41:36.418990 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-28 00:41:36.419001 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-28 00:41:36.419012 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-28 00:41:36.419024 | orchestrator | + echo
2026-02-28 00:41:36.419035 | orchestrator |
2026-02-28 00:41:36.419046 | orchestrator | # PULL IMAGES
2026-02-28 00:41:36.419133 | orchestrator |
2026-02-28 00:41:36.419146 | orchestrator | + echo '# PULL IMAGES'
2026-02-28 00:41:36.419158 | orchestrator | + echo
2026-02-28 00:41:36.420272 | orchestrator | ++ semver latest 7.0.0
2026-02-28 00:41:36.482202 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-28 00:41:36.482309 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-02-28 00:41:36.482349 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-02-28 00:41:38.493563 | orchestrator | 2026-02-28 00:41:38 | INFO  | Trying to run play pull-images in environment custom
2026-02-28 00:41:48.515758 | orchestrator | 2026-02-28 00:41:48 | INFO  | Prepare task for execution of pull-images.
2026-02-28 00:41:48.587173 | orchestrator | 2026-02-28 00:41:48 | INFO  | Task e5c2842b-ca7f-4727-b1e0-4eea7b792267 (pull-images) was prepared for execution.
2026-02-28 00:41:48.587287 | orchestrator | 2026-02-28 00:41:48 | INFO  | Task e5c2842b-ca7f-4727-b1e0-4eea7b792267 is running in background. No more output. Check ARA for logs.
2026-02-28 00:41:51.077942 | orchestrator | 2026-02-28 00:41:51 | INFO  | Trying to run play wipe-partitions in environment custom
2026-02-28 00:42:01.133755 | orchestrator | 2026-02-28 00:42:01 | INFO  | Prepare task for execution of wipe-partitions.
2026-02-28 00:42:01.208100 | orchestrator | 2026-02-28 00:42:01 | INFO  | Task 978a1f2a-7fea-4432-a597-3e4e3d81976d (wipe-partitions) was prepared for execution.
2026-02-28 00:42:01.208197 | orchestrator | 2026-02-28 00:42:01 | INFO  | It takes a moment until task 978a1f2a-7fea-4432-a597-3e4e3d81976d (wipe-partitions) has been started and output is visible here.
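The xtrace above shows a version gate: `semver latest 7.0.0` returns -1, the `-ge 0` branch is skipped, and the explicit `latest` check still triggers the pull. The job's real `semver` helper is not shown in the log; `compare_versions` below is a stand-in sketch (an assumption) that only mimics the observable behaviour, using `sort -V` for the concrete-version case.

```shell
#!/usr/bin/env bash
# Sketch of the version gate traced above; compare_versions is a hypothetical
# stand-in for the job's `semver` helper, not its actual implementation.
compare_versions() {
    local a=$1 b=$2
    # "latest" compares as -1, matching `semver latest 7.0.0` in the trace
    if [[ "$a" == "latest" ]]; then echo -1; return; fi
    if [[ "$a" == "$b" ]]; then echo 0; return; fi
    # sort -V (version sort) puts the smaller version first
    if [[ "$(printf '%s\n' "$a" "$b" | sort -V | head -n1)" == "$a" ]]; then
        echo -1
    else
        echo 1
    fi
}

MANAGER_VERSION=${MANAGER_VERSION:-latest}
if [[ "$(compare_versions "$MANAGER_VERSION" 7.0.0)" -ge 0 || "$MANAGER_VERSION" == "latest" ]]; then
    # New-enough (or rolling "latest") manager: pre-pull images in the
    # background, as the trace does. Echoed here instead of executed.
    echo osism apply --no-wait -r 2 -e custom pull-images
fi
```

The effect is that only managers older than 7.0.0 with a pinned version skip the background image pull.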
2026-02-28 00:42:16.130638 | orchestrator |
2026-02-28 00:42:16.130745 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-02-28 00:42:16.130760 | orchestrator |
2026-02-28 00:42:16.130770 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-02-28 00:42:16.130784 | orchestrator | Saturday 28 February 2026 00:42:06 +0000 (0:00:00.139) 0:00:00.140 *****
2026-02-28 00:42:16.130820 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:42:16.130832 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:42:16.130841 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:42:16.130850 | orchestrator |
2026-02-28 00:42:16.130859 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-02-28 00:42:16.130868 | orchestrator | Saturday 28 February 2026 00:42:07 +0000 (0:00:00.576) 0:00:00.716 *****
2026-02-28 00:42:16.130881 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:16.130890 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:16.130900 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:42:16.130909 | orchestrator |
2026-02-28 00:42:16.130918 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-02-28 00:42:16.130927 | orchestrator | Saturday 28 February 2026 00:42:07 +0000 (0:00:00.380) 0:00:01.097 *****
2026-02-28 00:42:16.130936 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:42:16.130946 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:42:16.130955 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:42:16.130964 | orchestrator |
2026-02-28 00:42:16.130972 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-02-28 00:42:16.130982 | orchestrator | Saturday 28 February 2026 00:42:08 +0000 (0:00:00.651) 0:00:01.748 *****
2026-02-28 00:42:16.130991 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:16.130999 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:16.131008 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:42:16.131017 | orchestrator |
2026-02-28 00:42:16.131026 | orchestrator | TASK [Check device availability] ***********************************************
2026-02-28 00:42:16.131035 | orchestrator | Saturday 28 February 2026 00:42:08 +0000 (0:00:00.254) 0:00:02.003 *****
2026-02-28 00:42:16.131103 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-02-28 00:42:16.131124 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-02-28 00:42:16.131137 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-02-28 00:42:16.131152 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-02-28 00:42:16.131161 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-02-28 00:42:16.131172 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-02-28 00:42:16.131183 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-02-28 00:42:16.131193 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-02-28 00:42:16.131203 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-02-28 00:42:16.131214 | orchestrator |
2026-02-28 00:42:16.131224 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-02-28 00:42:16.131234 | orchestrator | Saturday 28 February 2026 00:42:09 +0000 (0:00:01.258) 0:00:03.261 *****
2026-02-28 00:42:16.131245 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-02-28 00:42:16.131255 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-02-28 00:42:16.131265 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-02-28 00:42:16.131275 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-02-28 00:42:16.131285 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-02-28 00:42:16.131295 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-02-28 00:42:16.131305 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-02-28 00:42:16.131314 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-02-28 00:42:16.131324 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-02-28 00:42:16.131335 | orchestrator |
2026-02-28 00:42:16.131349 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-02-28 00:42:16.131358 | orchestrator | Saturday 28 February 2026 00:42:11 +0000 (0:00:01.585) 0:00:04.847 *****
2026-02-28 00:42:16.131367 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-02-28 00:42:16.131376 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-02-28 00:42:16.131385 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-02-28 00:42:16.131393 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-02-28 00:42:16.131410 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-02-28 00:42:16.131419 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-02-28 00:42:16.131428 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-02-28 00:42:16.131437 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-02-28 00:42:16.131446 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-02-28 00:42:16.131455 | orchestrator |
2026-02-28 00:42:16.131463 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-02-28 00:42:16.131472 | orchestrator | Saturday 28 February 2026 00:42:14 +0000 (0:00:03.105) 0:00:07.953 *****
2026-02-28 00:42:16.131481 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:42:16.131490 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:42:16.131499 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:42:16.131507 | orchestrator |
2026-02-28 00:42:16.131516 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-02-28 00:42:16.131525 | orchestrator | Saturday 28 February 2026 00:42:15 +0000 (0:00:00.630) 0:00:08.583 *****
2026-02-28 00:42:16.131534 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:42:16.131543 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:42:16.131551 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:42:16.131561 | orchestrator |
2026-02-28 00:42:16.131570 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:42:16.131580 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:42:16.131590 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:42:16.131615 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:42:16.131624 | orchestrator |
2026-02-28 00:42:16.131635 | orchestrator |
2026-02-28 00:42:16.131646 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:42:16.131657 | orchestrator | Saturday 28 February 2026 00:42:15 +0000 (0:00:00.668) 0:00:09.251 *****
2026-02-28 00:42:16.131668 | orchestrator | ===============================================================================
2026-02-28 00:42:16.131679 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.11s
2026-02-28 00:42:16.131690 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.59s
2026-02-28 00:42:16.131701 | orchestrator | Check device availability ----------------------------------------------- 1.26s
2026-02-28 00:42:16.131712 | orchestrator | Request device events from the kernel ----------------------------------- 0.67s
2026-02-28 00:42:16.131723 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.65s
2026-02-28 00:42:16.131734 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s
2026-02-28 00:42:16.131745 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.58s
2026-02-28 00:42:16.131756 | orchestrator | Remove all rook related logical devices --------------------------------- 0.38s
2026-02-28 00:42:16.131767 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s
2026-02-28 00:42:28.615732 | orchestrator | 2026-02-28 00:42:28 | INFO  | Prepare task for execution of facts.
2026-02-28 00:42:28.695203 | orchestrator | 2026-02-28 00:42:28 | INFO  | Task 19899742-d183-4018-b8f6-caea869676d3 (facts) was prepared for execution.
2026-02-28 00:42:28.695286 | orchestrator | 2026-02-28 00:42:28 | INFO  | It takes a moment until task 19899742-d183-4018-b8f6-caea869676d3 (facts) has been started and output is visible here.
2026-02-28 00:42:42.113795 | orchestrator |
2026-02-28 00:42:42.113921 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-02-28 00:42:42.113940 | orchestrator |
2026-02-28 00:42:42.113983 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-28 00:42:42.113995 | orchestrator | Saturday 28 February 2026 00:42:32 +0000 (0:00:00.293) 0:00:00.293 *****
2026-02-28 00:42:42.114007 | orchestrator | ok: [testbed-manager]
2026-02-28 00:42:42.114164 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:42:42.114179 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:42:42.114190 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:42:42.114201 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:42:42.114212 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:42:42.114223 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:42:42.114234 | orchestrator |
2026-02-28 00:42:42.114245 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-28 00:42:42.114257 | orchestrator | Saturday 28 February 2026 00:42:33 +0000 (0:00:01.174) 0:00:01.467 *****
2026-02-28 00:42:42.114268 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:42:42.114280 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:42:42.114291 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:42:42.114302 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:42:42.114313 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:42.114324 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:42.114338 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:42:42.114350 | orchestrator |
2026-02-28 00:42:42.114363 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-28 00:42:42.114394 | orchestrator |
2026-02-28 00:42:42.114407 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-28 00:42:42.114422 | orchestrator | Saturday 28 February 2026 00:42:35 +0000 (0:00:01.371) 0:00:02.838 *****
2026-02-28 00:42:42.114435 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:42:42.114448 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:42:42.114487 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:42:42.114500 | orchestrator | ok: [testbed-manager]
2026-02-28 00:42:42.114512 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:42:42.114526 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:42:42.114547 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:42:42.114563 | orchestrator |
2026-02-28 00:42:42.114576 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-02-28 00:42:42.114588 | orchestrator |
2026-02-28 00:42:42.114601 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-02-28 00:42:42.114614 | orchestrator | Saturday 28 February 2026 00:42:41 +0000 (0:00:05.824) 0:00:08.663 *****
2026-02-28 00:42:42.114630 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:42:42.114650 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:42:42.114664 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:42:42.114677 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:42:42.114689 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:42.114700 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:42.114711 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:42:42.114722 | orchestrator |
2026-02-28 00:42:42.114733 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:42:42.114746 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:42:42.114766 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:42:42.114779 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:42:42.114790 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:42:42.114801 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:42:42.114824 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:42:42.114835 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:42:42.114847 | orchestrator |
2026-02-28 00:42:42.114858 | orchestrator |
2026-02-28 00:42:42.114869 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:42:42.114880 | orchestrator | Saturday 28 February 2026 00:42:41 +0000 (0:00:00.550) 0:00:09.214 *****
2026-02-28 00:42:42.114892 | orchestrator | ===============================================================================
2026-02-28 00:42:42.114903 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.82s
2026-02-28 00:42:42.114915 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.37s
2026-02-28 00:42:42.114926 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.17s
2026-02-28 00:42:42.114937 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s
2026-02-28 00:42:44.629969 | orchestrator | 2026-02-28 00:42:44 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes.
2026-02-28 00:42:44.702783 | orchestrator | 2026-02-28 00:42:44 | INFO  | Task 5cb32e24-1780-43db-a39e-1bb85084dd6b (ceph-configure-lvm-volumes) was prepared for execution.
2026-02-28 00:42:44.702883 | orchestrator | 2026-02-28 00:42:44 | INFO  | It takes a moment until task 5cb32e24-1780-43db-a39e-1bb85084dd6b (ceph-configure-lvm-volumes) has been started and output is visible here.
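The earlier wipe-partitions play boils down to four shell operations per OSD node: clear filesystem and RAID signatures, zero the first 32M, then have udev re-read the devices. A condensed, non-destructive sketch (the device list matches the testbed nodes; running it through `echo` first is an illustration choice, not part of the play):

```shell
#!/usr/bin/env bash
# Shell-level equivalent of the wipe-partitions play above. wipe_devices
# takes a runner ($1) so the sketch can be dry-run with `echo` instead of
# actually destroying data.
set -euo pipefail

wipe_devices() {
    local run=$1; shift
    for dev in "$@"; do
        $run wipefs --all "$dev"                       # TASK [Wipe partitions with wipefs]
        $run dd if=/dev/zero of="$dev" bs=1M count=32  # TASK [Overwrite first 32M with zeros]
    done
    $run udevadm control --reload-rules                # TASK [Reload udev rules]
    $run udevadm trigger                               # TASK [Request device events from the kernel]
}

# Dry run: prints the commands that would run on testbed-node-3/4/5.
wipe_devices echo /dev/sdb /dev/sdc /dev/sdd
```

Replacing `echo` with `sudo` (or an empty runner) would execute the wipe for real; the zeroing step is what guarantees `ceph-volume` later sees the devices as clean.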
2026-02-28 00:42:55.707978 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-28 00:42:55.708121 | orchestrator | 2.16.14
2026-02-28 00:42:55.708139 | orchestrator |
2026-02-28 00:42:55.708153 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-28 00:42:55.708165 | orchestrator |
2026-02-28 00:42:55.708177 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-28 00:42:55.708188 | orchestrator | Saturday 28 February 2026 00:42:48 +0000 (0:00:00.308) 0:00:00.308 *****
2026-02-28 00:42:55.708200 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-28 00:42:55.708212 | orchestrator |
2026-02-28 00:42:55.708223 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-28 00:42:55.708235 | orchestrator | Saturday 28 February 2026 00:42:49 +0000 (0:00:00.237) 0:00:00.545 *****
2026-02-28 00:42:55.708247 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:42:55.708259 | orchestrator |
2026-02-28 00:42:55.708270 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:55.708281 | orchestrator | Saturday 28 February 2026 00:42:49 +0000 (0:00:00.227) 0:00:00.772 *****
2026-02-28 00:42:55.708302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-28 00:42:55.708314 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-28 00:42:55.708325 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-28 00:42:55.708336 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-28 00:42:55.708347 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-28 00:42:55.708358 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-28 00:42:55.708369 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-28 00:42:55.708380 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-28 00:42:55.708391 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-28 00:42:55.708403 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-28 00:42:55.708432 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-28 00:42:55.708444 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-28 00:42:55.708455 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-28 00:42:55.708466 | orchestrator |
2026-02-28 00:42:55.708477 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:55.708488 | orchestrator | Saturday 28 February 2026 00:42:49 +0000 (0:00:00.414) 0:00:01.187 *****
2026-02-28 00:42:55.708499 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:55.708510 | orchestrator |
2026-02-28 00:42:55.708521 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:55.708532 | orchestrator | Saturday 28 February 2026 00:42:50 +0000 (0:00:00.186) 0:00:01.374 *****
2026-02-28 00:42:55.708543 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:55.708554 | orchestrator |
2026-02-28 00:42:55.708564 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:55.708580 | orchestrator | Saturday 28 February 2026 00:42:50 +0000 (0:00:00.185) 0:00:01.559 *****
2026-02-28 00:42:55.708592 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:55.708603 | orchestrator |
2026-02-28 00:42:55.708614 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:55.708626 | orchestrator | Saturday 28 February 2026 00:42:50 +0000 (0:00:00.182) 0:00:01.742 *****
2026-02-28 00:42:55.708637 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:55.708648 | orchestrator |
2026-02-28 00:42:55.708660 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:55.708671 | orchestrator | Saturday 28 February 2026 00:42:50 +0000 (0:00:00.184) 0:00:01.927 *****
2026-02-28 00:42:55.708681 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:55.708692 | orchestrator |
2026-02-28 00:42:55.708703 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:55.708714 | orchestrator | Saturday 28 February 2026 00:42:50 +0000 (0:00:00.188) 0:00:02.115 *****
2026-02-28 00:42:55.708725 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:55.708736 | orchestrator |
2026-02-28 00:42:55.708747 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:55.708758 | orchestrator | Saturday 28 February 2026 00:42:50 +0000 (0:00:00.198) 0:00:02.313 *****
2026-02-28 00:42:55.708769 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:55.708780 | orchestrator |
2026-02-28 00:42:55.708791 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:55.708802 | orchestrator | Saturday 28 February 2026 00:42:51 +0000 (0:00:00.189) 0:00:02.503 *****
2026-02-28 00:42:55.708813 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:55.708824 | orchestrator |
2026-02-28 00:42:55.708835 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:55.708846 | orchestrator | Saturday 28 February 2026 00:42:51 +0000 (0:00:00.198) 0:00:02.701 *****
2026-02-28 00:42:55.708858 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103)
2026-02-28 00:42:55.708869 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103)
2026-02-28 00:42:55.708881 | orchestrator |
2026-02-28 00:42:55.708892 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:55.708921 | orchestrator | Saturday 28 February 2026 00:42:51 +0000 (0:00:00.385) 0:00:03.087 *****
2026-02-28 00:42:55.708933 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9a1bcb93-f154-4a17-8f9d-a00d049f4cc1)
2026-02-28 00:42:55.708944 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9a1bcb93-f154-4a17-8f9d-a00d049f4cc1)
2026-02-28 00:42:55.708955 | orchestrator |
2026-02-28 00:42:55.708970 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:55.708988 | orchestrator | Saturday 28 February 2026 00:42:52 +0000 (0:00:00.554) 0:00:03.641 *****
2026-02-28 00:42:55.708999 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_16fcc6e7-951a-43ed-8f3a-017ae19ace76)
2026-02-28 00:42:55.709010 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_16fcc6e7-951a-43ed-8f3a-017ae19ace76)
2026-02-28 00:42:55.709021 | orchestrator |
2026-02-28 00:42:55.709059 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:55.709071 | orchestrator | Saturday 28 February 2026 00:42:52 +0000 (0:00:00.527) 0:00:04.169 *****
2026-02-28 00:42:55.709082 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_afb4b4ce-eec7-46b2-91b5-87577cac503b)
2026-02-28 00:42:55.709093 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_afb4b4ce-eec7-46b2-91b5-87577cac503b)
2026-02-28 00:42:55.709105 | orchestrator |
2026-02-28 00:42:55.709116 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:55.709127 | orchestrator | Saturday 28 February 2026 00:42:53 +0000 (0:00:00.704) 0:00:04.873 *****
2026-02-28 00:42:55.709138 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-28 00:42:55.709149 | orchestrator |
2026-02-28 00:42:55.709160 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:42:55.709171 | orchestrator | Saturday 28 February 2026 00:42:53 +0000 (0:00:00.295) 0:00:05.168 *****
2026-02-28 00:42:55.709182 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-28 00:42:55.709193 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-28 00:42:55.709204 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-28 00:42:55.709215 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-28 00:42:55.709226 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-28 00:42:55.709237 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-28 00:42:55.709248 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-28 00:42:55.709259 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-02-28 00:42:55.709270 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-02-28 00:42:55.709281 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-02-28 00:42:55.709292 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-02-28 00:42:55.709303 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-02-28 00:42:55.709314 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-02-28 00:42:55.709325 | orchestrator |
2026-02-28 00:42:55.709336 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:42:55.709347 | orchestrator | Saturday 28 February 2026 00:42:54 +0000 (0:00:00.379) 0:00:05.548 *****
2026-02-28 00:42:55.709358 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:55.709369 | orchestrator |
2026-02-28 00:42:55.709380 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:42:55.709391 | orchestrator | Saturday 28 February 2026 00:42:54 +0000 (0:00:00.215) 0:00:05.764 *****
2026-02-28 00:42:55.709402 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:55.709413 | orchestrator |
2026-02-28 00:42:55.709424 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:42:55.709435 | orchestrator | Saturday 28 February 2026 00:42:54 +0000 (0:00:00.216) 0:00:05.980 *****
2026-02-28 00:42:55.709446 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:55.709463 | orchestrator |
2026-02-28 00:42:55.709474 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:42:55.709485 | orchestrator | Saturday 28 February 2026 00:42:54 +0000 (0:00:00.202) 0:00:06.183 *****
2026-02-28 00:42:55.709496 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:55.709507 | orchestrator |
2026-02-28 00:42:55.709518 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:42:55.709529 | orchestrator | Saturday 28 February 2026 00:42:55 +0000 (0:00:00.208) 0:00:06.391 *****
2026-02-28 00:42:55.709540 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:55.709551 | orchestrator |
2026-02-28 00:42:55.709562 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:42:55.709574 | orchestrator | Saturday 28 February 2026 00:42:55 +0000 (0:00:00.217) 0:00:06.608 *****
2026-02-28 00:42:55.709585 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:55.709595 | orchestrator |
2026-02-28 00:42:55.709606 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:42:55.709618 | orchestrator | Saturday 28 February 2026 00:42:55 +0000 (0:00:00.229) 0:00:06.838 *****
2026-02-28 00:42:55.709629 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:55.709640 | orchestrator |
2026-02-28 00:42:55.709657 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:03.731656 | orchestrator | Saturday 28 February 2026 00:42:55 +0000 (0:00:00.204) 0:00:07.042 *****
2026-02-28 00:43:03.731758 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:03.731773 | orchestrator |
2026-02-28 00:43:03.731785 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:03.731795 | orchestrator | Saturday 28 February 2026 00:42:55 +0000 (0:00:00.208) 0:00:07.251 *****
2026-02-28 00:43:03.731805 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-02-28 00:43:03.731816 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-02-28 00:43:03.731827 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-02-28 00:43:03.731837 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-02-28 00:43:03.731846 | orchestrator |
2026-02-28 00:43:03.731856 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:03.731888 | orchestrator | Saturday 28 February 2026 00:42:56 +0000 (0:00:01.082) 0:00:08.333 *****
2026-02-28 00:43:03.731906 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:03.731928 | orchestrator |
2026-02-28 00:43:03.731951 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:03.731967 | orchestrator | Saturday 28 February 2026 00:42:57 +0000 (0:00:00.199) 0:00:08.533 *****
2026-02-28 00:43:03.731984 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:03.731999 | orchestrator |
2026-02-28 00:43:03.732015 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:03.732074 | orchestrator | Saturday 28 February 2026 00:42:57 +0000 (0:00:00.223) 0:00:08.756 *****
2026-02-28 00:43:03.732091 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:03.732106 | orchestrator |
2026-02-28 00:43:03.732122 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:03.732138 | orchestrator | Saturday 28 February 2026 00:42:57 +0000 (0:00:00.207) 0:00:08.963 *****
2026-02-28 00:43:03.732155 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:03.732170 | orchestrator |
2026-02-28 00:43:03.732187 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-28 00:43:03.732203 | orchestrator | Saturday 28 February 2026 00:42:57 +0000 (0:00:00.209) 0:00:09.173 *****
2026-02-28 00:43:03.732222 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-02-28 00:43:03.732240 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-02-28 00:43:03.732258 | orchestrator |
2026-02-28 00:43:03.732270 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-02-28 00:43:03.732281 | orchestrator | Saturday 28 February 2026 00:42:58 +0000 (0:00:00.178) 0:00:09.352 *****
2026-02-28 00:43:03.732318 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:03.732330 | orchestrator |
2026-02-28 00:43:03.732341 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-28 00:43:03.732353 | orchestrator | Saturday 28 February 2026 00:42:58 +0000 (0:00:00.135) 0:00:09.487 *****
2026-02-28 00:43:03.732362 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:03.732372 | orchestrator |
2026-02-28 00:43:03.732381 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-28 00:43:03.732391 | orchestrator | Saturday 28 February 2026 00:42:58 +0000 (0:00:00.145) 0:00:09.633 *****
2026-02-28 00:43:03.732401 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:03.732410 | orchestrator |
2026-02-28 00:43:03.732420 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-28 00:43:03.732430 | orchestrator | Saturday 28 February 2026 00:42:58 +0000 (0:00:00.141) 0:00:09.774 *****
2026-02-28 00:43:03.732439 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:43:03.732449 | orchestrator |
2026-02-28 00:43:03.732459 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-28 00:43:03.732468 | orchestrator | Saturday 28 February 2026 00:42:58 +0000 (0:00:00.140) 0:00:09.915 *****
2026-02-28 00:43:03.732479 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d18609e-ecdb-578d-a05b-e7913934f080'}})
2026-02-28 00:43:03.732490 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dcf33d59-3ae6-5017-b2aa-1b02884ceea7'}})
2026-02-28 00:43:03.732499 | orchestrator |
2026-02-28 00:43:03.732509 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-28 00:43:03.732519 | orchestrator | Saturday 28 February 2026 00:42:58 +0000 (0:00:00.168) 0:00:10.084 *****
2026-02-28 00:43:03.732530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d18609e-ecdb-578d-a05b-e7913934f080'}})
2026-02-28 00:43:03.732548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dcf33d59-3ae6-5017-b2aa-1b02884ceea7'}})
2026-02-28 00:43:03.732566 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:03.732576 | orchestrator |
2026-02-28 00:43:03.732586 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-28 00:43:03.732595 | orchestrator | Saturday 28 February 2026 00:42:58 +0000 (0:00:00.166) 0:00:10.250 *****
2026-02-28 00:43:03.732605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d18609e-ecdb-578d-a05b-e7913934f080'}})
2026-02-28 00:43:03.732615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dcf33d59-3ae6-5017-b2aa-1b02884ceea7'}})
2026-02-28 00:43:03.732625 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:03.732635 | orchestrator |
2026-02-28 00:43:03.732644 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-28 00:43:03.732654 | orchestrator | Saturday 28 February 2026 00:42:59 +0000 (0:00:00.397) 0:00:10.648 *****
2026-02-28 00:43:03.732664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d18609e-ecdb-578d-a05b-e7913934f080'}})
2026-02-28 00:43:03.732693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dcf33d59-3ae6-5017-b2aa-1b02884ceea7'}})
2026-02-28 00:43:03.732709 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:03.732725 |
orchestrator | 2026-02-28 00:43:03.732739 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-28 00:43:03.732753 | orchestrator | Saturday 28 February 2026 00:42:59 +0000 (0:00:00.166) 0:00:10.814 ***** 2026-02-28 00:43:03.732767 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:43:03.732783 | orchestrator | 2026-02-28 00:43:03.732801 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-28 00:43:03.732816 | orchestrator | Saturday 28 February 2026 00:42:59 +0000 (0:00:00.143) 0:00:10.957 ***** 2026-02-28 00:43:03.732832 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:43:03.732852 | orchestrator | 2026-02-28 00:43:03.732862 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-28 00:43:03.732872 | orchestrator | Saturday 28 February 2026 00:42:59 +0000 (0:00:00.141) 0:00:11.099 ***** 2026-02-28 00:43:03.732882 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:43:03.732892 | orchestrator | 2026-02-28 00:43:03.732902 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-28 00:43:03.732912 | orchestrator | Saturday 28 February 2026 00:42:59 +0000 (0:00:00.134) 0:00:11.233 ***** 2026-02-28 00:43:03.732922 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:43:03.732932 | orchestrator | 2026-02-28 00:43:03.732941 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-28 00:43:03.732951 | orchestrator | Saturday 28 February 2026 00:43:00 +0000 (0:00:00.124) 0:00:11.358 ***** 2026-02-28 00:43:03.732960 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:43:03.732970 | orchestrator | 2026-02-28 00:43:03.732980 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-28 00:43:03.733005 | orchestrator | Saturday 28 February 2026 00:43:00 +0000 
(0:00:00.156) 0:00:11.515 ***** 2026-02-28 00:43:03.733025 | orchestrator | ok: [testbed-node-3] => { 2026-02-28 00:43:03.733063 | orchestrator |  "ceph_osd_devices": { 2026-02-28 00:43:03.733075 | orchestrator |  "sdb": { 2026-02-28 00:43:03.733086 | orchestrator |  "osd_lvm_uuid": "4d18609e-ecdb-578d-a05b-e7913934f080" 2026-02-28 00:43:03.733096 | orchestrator |  }, 2026-02-28 00:43:03.733106 | orchestrator |  "sdc": { 2026-02-28 00:43:03.733116 | orchestrator |  "osd_lvm_uuid": "dcf33d59-3ae6-5017-b2aa-1b02884ceea7" 2026-02-28 00:43:03.733126 | orchestrator |  } 2026-02-28 00:43:03.733136 | orchestrator |  } 2026-02-28 00:43:03.733146 | orchestrator | } 2026-02-28 00:43:03.733156 | orchestrator | 2026-02-28 00:43:03.733166 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-28 00:43:03.733175 | orchestrator | Saturday 28 February 2026 00:43:00 +0000 (0:00:00.147) 0:00:11.662 ***** 2026-02-28 00:43:03.733185 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:43:03.733195 | orchestrator | 2026-02-28 00:43:03.733205 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-28 00:43:03.733215 | orchestrator | Saturday 28 February 2026 00:43:00 +0000 (0:00:00.153) 0:00:11.816 ***** 2026-02-28 00:43:03.733225 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:43:03.733234 | orchestrator | 2026-02-28 00:43:03.733245 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-28 00:43:03.733254 | orchestrator | Saturday 28 February 2026 00:43:00 +0000 (0:00:00.137) 0:00:11.953 ***** 2026-02-28 00:43:03.733264 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:43:03.733274 | orchestrator | 2026-02-28 00:43:03.733284 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-28 00:43:03.733293 | orchestrator | Saturday 28 February 2026 00:43:00 +0000 
(0:00:00.195) 0:00:12.149 ***** 2026-02-28 00:43:03.733303 | orchestrator | changed: [testbed-node-3] => { 2026-02-28 00:43:03.733326 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-28 00:43:03.733337 | orchestrator |  "ceph_osd_devices": { 2026-02-28 00:43:03.733356 | orchestrator |  "sdb": { 2026-02-28 00:43:03.733366 | orchestrator |  "osd_lvm_uuid": "4d18609e-ecdb-578d-a05b-e7913934f080" 2026-02-28 00:43:03.733376 | orchestrator |  }, 2026-02-28 00:43:03.733386 | orchestrator |  "sdc": { 2026-02-28 00:43:03.733396 | orchestrator |  "osd_lvm_uuid": "dcf33d59-3ae6-5017-b2aa-1b02884ceea7" 2026-02-28 00:43:03.733406 | orchestrator |  } 2026-02-28 00:43:03.733416 | orchestrator |  }, 2026-02-28 00:43:03.733426 | orchestrator |  "lvm_volumes": [ 2026-02-28 00:43:03.733436 | orchestrator |  { 2026-02-28 00:43:03.733446 | orchestrator |  "data": "osd-block-4d18609e-ecdb-578d-a05b-e7913934f080", 2026-02-28 00:43:03.733456 | orchestrator |  "data_vg": "ceph-4d18609e-ecdb-578d-a05b-e7913934f080" 2026-02-28 00:43:03.733472 | orchestrator |  }, 2026-02-28 00:43:03.733482 | orchestrator |  { 2026-02-28 00:43:03.733492 | orchestrator |  "data": "osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7", 2026-02-28 00:43:03.733502 | orchestrator |  "data_vg": "ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7" 2026-02-28 00:43:03.733512 | orchestrator |  } 2026-02-28 00:43:03.733522 | orchestrator |  ] 2026-02-28 00:43:03.733532 | orchestrator |  } 2026-02-28 00:43:03.733542 | orchestrator | } 2026-02-28 00:43:03.733552 | orchestrator | 2026-02-28 00:43:03.733562 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-28 00:43:03.733572 | orchestrator | Saturday 28 February 2026 00:43:01 +0000 (0:00:00.540) 0:00:12.690 ***** 2026-02-28 00:43:03.733581 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-28 00:43:03.733591 | orchestrator | 2026-02-28 00:43:03.733601 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-02-28 00:43:03.733610 | orchestrator | 2026-02-28 00:43:03.733620 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-28 00:43:03.733630 | orchestrator | Saturday 28 February 2026 00:43:03 +0000 (0:00:01.855) 0:00:14.546 ***** 2026-02-28 00:43:03.733640 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-28 00:43:03.733649 | orchestrator | 2026-02-28 00:43:03.733659 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-28 00:43:03.733669 | orchestrator | Saturday 28 February 2026 00:43:03 +0000 (0:00:00.255) 0:00:14.802 ***** 2026-02-28 00:43:03.733679 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:43:03.733689 | orchestrator | 2026-02-28 00:43:03.733707 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:11.561470 | orchestrator | Saturday 28 February 2026 00:43:03 +0000 (0:00:00.268) 0:00:15.070 ***** 2026-02-28 00:43:11.561563 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-28 00:43:11.561578 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-28 00:43:11.561589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-28 00:43:11.561600 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-28 00:43:11.561611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-28 00:43:11.561622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-28 00:43:11.561633 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-28 00:43:11.561648 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-28 00:43:11.561659 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-28 00:43:11.561670 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-28 00:43:11.561681 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-28 00:43:11.561692 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-28 00:43:11.561719 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-28 00:43:11.561731 | orchestrator | 2026-02-28 00:43:11.561743 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:11.561754 | orchestrator | Saturday 28 February 2026 00:43:04 +0000 (0:00:00.401) 0:00:15.472 ***** 2026-02-28 00:43:11.561764 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:43:11.561776 | orchestrator | 2026-02-28 00:43:11.561787 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:11.561798 | orchestrator | Saturday 28 February 2026 00:43:04 +0000 (0:00:00.271) 0:00:15.744 ***** 2026-02-28 00:43:11.561831 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:43:11.561842 | orchestrator | 2026-02-28 00:43:11.561853 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:11.561864 | orchestrator | Saturday 28 February 2026 00:43:04 +0000 (0:00:00.268) 0:00:16.012 ***** 2026-02-28 00:43:11.561875 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:43:11.561885 | orchestrator | 2026-02-28 00:43:11.561896 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:11.561907 | 
orchestrator | Saturday 28 February 2026 00:43:04 +0000 (0:00:00.208) 0:00:16.221 ***** 2026-02-28 00:43:11.561918 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:43:11.561929 | orchestrator | 2026-02-28 00:43:11.561940 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:11.561950 | orchestrator | Saturday 28 February 2026 00:43:05 +0000 (0:00:00.249) 0:00:16.470 ***** 2026-02-28 00:43:11.561961 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:43:11.561972 | orchestrator | 2026-02-28 00:43:11.561983 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:11.561994 | orchestrator | Saturday 28 February 2026 00:43:05 +0000 (0:00:00.700) 0:00:17.171 ***** 2026-02-28 00:43:11.562004 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:43:11.562087 | orchestrator | 2026-02-28 00:43:11.562103 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:11.562116 | orchestrator | Saturday 28 February 2026 00:43:06 +0000 (0:00:00.214) 0:00:17.385 ***** 2026-02-28 00:43:11.562128 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:43:11.562139 | orchestrator | 2026-02-28 00:43:11.562152 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:11.562164 | orchestrator | Saturday 28 February 2026 00:43:06 +0000 (0:00:00.200) 0:00:17.585 ***** 2026-02-28 00:43:11.562176 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:43:11.562188 | orchestrator | 2026-02-28 00:43:11.562200 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:11.562212 | orchestrator | Saturday 28 February 2026 00:43:06 +0000 (0:00:00.200) 0:00:17.785 ***** 2026-02-28 00:43:11.562225 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97) 2026-02-28 00:43:11.562238 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97) 2026-02-28 00:43:11.562250 | orchestrator | 2026-02-28 00:43:11.562263 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:11.562275 | orchestrator | Saturday 28 February 2026 00:43:06 +0000 (0:00:00.415) 0:00:18.201 ***** 2026-02-28 00:43:11.562287 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2388cee9-22a9-4416-93b3-e236454bc031) 2026-02-28 00:43:11.562300 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2388cee9-22a9-4416-93b3-e236454bc031) 2026-02-28 00:43:11.562312 | orchestrator | 2026-02-28 00:43:11.562324 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:11.562336 | orchestrator | Saturday 28 February 2026 00:43:07 +0000 (0:00:00.426) 0:00:18.627 ***** 2026-02-28 00:43:11.562348 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fa3b351c-e54b-439c-bac1-d7e08e27df4b) 2026-02-28 00:43:11.562361 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fa3b351c-e54b-439c-bac1-d7e08e27df4b) 2026-02-28 00:43:11.562373 | orchestrator | 2026-02-28 00:43:11.562386 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:11.562415 | orchestrator | Saturday 28 February 2026 00:43:07 +0000 (0:00:00.424) 0:00:19.052 ***** 2026-02-28 00:43:11.562427 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_151bff65-91b8-4b11-a525-96a3d98709b9) 2026-02-28 00:43:11.562438 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_151bff65-91b8-4b11-a525-96a3d98709b9) 2026-02-28 00:43:11.562449 | orchestrator | 2026-02-28 00:43:11.562468 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-02-28 00:43:11.562479 | orchestrator | Saturday 28 February 2026 00:43:08 +0000 (0:00:00.437) 0:00:19.490 ***** 2026-02-28 00:43:11.562490 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-28 00:43:11.562501 | orchestrator | 2026-02-28 00:43:11.562512 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:43:11.562523 | orchestrator | Saturday 28 February 2026 00:43:08 +0000 (0:00:00.321) 0:00:19.812 ***** 2026-02-28 00:43:11.562534 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-02-28 00:43:11.562544 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-28 00:43:11.562562 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-28 00:43:11.562574 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-28 00:43:11.562585 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-28 00:43:11.562595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-28 00:43:11.562606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-28 00:43:11.562617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-28 00:43:11.562628 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-28 00:43:11.562638 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-28 00:43:11.562649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
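An aside on the configuration logic visible in the node-3 play above: each device in `ceph_osd_devices` is assigned an `osd_lvm_uuid`, and the "Generate lvm_volumes structure (block only)" step then expands every UUID into a `lvm_volumes` entry whose LV is `osd-block-<uuid>` and whose VG is `ceph-<uuid>`, as the "Print configuration data" dump confirms. A minimal Python sketch of that apparent mapping (illustrative only; the playbook itself does this with Jinja2 templating, and `build_lvm_volumes` is a hypothetical helper name, not part of the OSISM code):

```python
# Sketch of the mapping seen in the log: ceph_osd_devices -> lvm_volumes.
# Assumption: the playbook derives both names purely from osd_lvm_uuid.

def build_lvm_volumes(ceph_osd_devices):
    """Derive ceph-ansible-style lvm_volumes entries from per-device UUIDs."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",   # logical volume name
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",     # volume group name
        }
        for spec in ceph_osd_devices.values()
    ]

# Values taken from the testbed-node-3 output above.
devices = {
    "sdb": {"osd_lvm_uuid": "4d18609e-ecdb-578d-a05b-e7913934f080"},
    "sdc": {"osd_lvm_uuid": "dcf33d59-3ae6-5017-b2aa-1b02884ceea7"},
}
print(build_lvm_volumes(devices))
```

Running this against the node-3 UUIDs reproduces the `lvm_volumes` list printed by the "Print configuration data" task.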
2026-02-28 00:43:11.562660 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-02-28 00:43:11.562671 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-02-28 00:43:11.562682 | orchestrator |
2026-02-28 00:43:11.562693 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:11.562704 | orchestrator | Saturday 28 February 2026 00:43:08 +0000 (0:00:00.379) 0:00:20.191 *****
2026-02-28 00:43:11.562715 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:11.562725 | orchestrator |
2026-02-28 00:43:11.562736 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:11.562747 | orchestrator | Saturday 28 February 2026 00:43:09 +0000 (0:00:00.641) 0:00:20.833 *****
2026-02-28 00:43:11.562758 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:11.562769 | orchestrator |
2026-02-28 00:43:11.562780 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:11.562791 | orchestrator | Saturday 28 February 2026 00:43:09 +0000 (0:00:00.173) 0:00:21.006 *****
2026-02-28 00:43:11.562802 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:11.562813 | orchestrator |
2026-02-28 00:43:11.562824 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:11.562835 | orchestrator | Saturday 28 February 2026 00:43:09 +0000 (0:00:00.171) 0:00:21.177 *****
2026-02-28 00:43:11.562845 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:11.562856 | orchestrator |
2026-02-28 00:43:11.562867 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:11.562878 | orchestrator | Saturday 28 February 2026 00:43:10 +0000 (0:00:00.172) 0:00:21.349 *****
2026-02-28 00:43:11.562889 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:11.562900 | orchestrator |
2026-02-28 00:43:11.562910 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:11.562921 | orchestrator | Saturday 28 February 2026 00:43:10 +0000 (0:00:00.156) 0:00:21.506 *****
2026-02-28 00:43:11.562932 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:11.562949 | orchestrator |
2026-02-28 00:43:11.562960 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:11.562971 | orchestrator | Saturday 28 February 2026 00:43:10 +0000 (0:00:00.191) 0:00:21.698 *****
2026-02-28 00:43:11.562993 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:11.563004 | orchestrator |
2026-02-28 00:43:11.563015 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:11.563086 | orchestrator | Saturday 28 February 2026 00:43:10 +0000 (0:00:00.155) 0:00:21.853 *****
2026-02-28 00:43:11.563101 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:11.563112 | orchestrator |
2026-02-28 00:43:11.563123 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:11.563134 | orchestrator | Saturday 28 February 2026 00:43:10 +0000 (0:00:00.164) 0:00:22.018 *****
2026-02-28 00:43:11.563145 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-02-28 00:43:11.563156 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-02-28 00:43:11.563167 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-02-28 00:43:11.563178 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-02-28 00:43:11.563189 | orchestrator |
2026-02-28 00:43:11.563200 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:11.563211 | orchestrator | Saturday 28 February 2026 00:43:11 +0000 (0:00:00.777) 0:00:22.796 *****
2026-02-28 00:43:11.563222 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:17.953602 | orchestrator |
2026-02-28 00:43:17.953751 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:17.953781 | orchestrator | Saturday 28 February 2026 00:43:11 +0000 (0:00:00.172) 0:00:22.968 *****
2026-02-28 00:43:17.953795 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:17.953808 | orchestrator |
2026-02-28 00:43:17.953824 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:17.953851 | orchestrator | Saturday 28 February 2026 00:43:11 +0000 (0:00:00.163) 0:00:23.132 *****
2026-02-28 00:43:17.953874 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:17.953893 | orchestrator |
2026-02-28 00:43:17.953911 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:17.953938 | orchestrator | Saturday 28 February 2026 00:43:11 +0000 (0:00:00.163) 0:00:23.295 *****
2026-02-28 00:43:17.953957 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:17.953973 | orchestrator |
2026-02-28 00:43:17.953989 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-28 00:43:17.954007 | orchestrator | Saturday 28 February 2026 00:43:12 +0000 (0:00:00.498) 0:00:23.794 *****
2026-02-28 00:43:17.954124 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-02-28 00:43:17.954148 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-02-28 00:43:17.954168 | orchestrator |
2026-02-28 00:43:17.954187 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-02-28 00:43:17.954237 | orchestrator | Saturday 28 February 2026 00:43:12 +0000 (0:00:00.153) 0:00:23.947 *****
2026-02-28 00:43:17.954264 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:17.954288 | orchestrator |
2026-02-28 00:43:17.954315 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-28 00:43:17.954336 | orchestrator | Saturday 28 February 2026 00:43:12 +0000 (0:00:00.127) 0:00:24.075 *****
2026-02-28 00:43:17.954356 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:17.954378 | orchestrator |
2026-02-28 00:43:17.954398 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-28 00:43:17.954421 | orchestrator | Saturday 28 February 2026 00:43:12 +0000 (0:00:00.135) 0:00:24.211 *****
2026-02-28 00:43:17.954433 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:17.954444 | orchestrator |
2026-02-28 00:43:17.954454 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-28 00:43:17.954466 | orchestrator | Saturday 28 February 2026 00:43:13 +0000 (0:00:00.157) 0:00:24.368 *****
2026-02-28 00:43:17.954500 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:43:17.954512 | orchestrator |
2026-02-28 00:43:17.954523 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-28 00:43:17.954534 | orchestrator | Saturday 28 February 2026 00:43:13 +0000 (0:00:00.145) 0:00:24.513 *****
2026-02-28 00:43:17.954546 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '73c4f4bf-6139-5634-9e57-de597eca9964'}})
2026-02-28 00:43:17.954557 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '17f6d453-f54a-57d2-bd55-b12b469b0db8'}})
2026-02-28 00:43:17.954568 | orchestrator |
2026-02-28 00:43:17.954579 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-28 00:43:17.954590 | orchestrator | Saturday 28 February 2026 00:43:13 +0000 (0:00:00.140) 0:00:24.654 *****
2026-02-28 00:43:17.954602 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '73c4f4bf-6139-5634-9e57-de597eca9964'}})
2026-02-28 00:43:17.954614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '17f6d453-f54a-57d2-bd55-b12b469b0db8'}})
2026-02-28 00:43:17.954625 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:17.954636 | orchestrator |
2026-02-28 00:43:17.954647 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-28 00:43:17.954658 | orchestrator | Saturday 28 February 2026 00:43:13 +0000 (0:00:00.140) 0:00:24.795 *****
2026-02-28 00:43:17.954669 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '73c4f4bf-6139-5634-9e57-de597eca9964'}})
2026-02-28 00:43:17.954680 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '17f6d453-f54a-57d2-bd55-b12b469b0db8'}})
2026-02-28 00:43:17.954692 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:17.954703 | orchestrator |
2026-02-28 00:43:17.954713 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-28 00:43:17.954724 | orchestrator | Saturday 28 February 2026 00:43:13 +0000 (0:00:00.135) 0:00:24.930 *****
2026-02-28 00:43:17.954741 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '73c4f4bf-6139-5634-9e57-de597eca9964'}})
2026-02-28 00:43:17.954759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '17f6d453-f54a-57d2-bd55-b12b469b0db8'}})
2026-02-28 00:43:17.954777 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:17.954797 | orchestrator |
2026-02-28 00:43:17.954814 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-28 00:43:17.954834 | orchestrator | Saturday 28 February 2026 00:43:13 +0000 (0:00:00.138) 0:00:25.068 *****
2026-02-28 00:43:17.954847 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:43:17.954858 | orchestrator |
2026-02-28 00:43:17.954869 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-28 00:43:17.954880 | orchestrator | Saturday 28 February 2026 00:43:13 +0000 (0:00:00.108) 0:00:25.177 *****
2026-02-28 00:43:17.954890 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:43:17.954901 | orchestrator |
2026-02-28 00:43:17.954912 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-28 00:43:17.954923 | orchestrator | Saturday 28 February 2026 00:43:13 +0000 (0:00:00.113) 0:00:25.290 *****
2026-02-28 00:43:17.954958 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:17.954970 | orchestrator |
2026-02-28 00:43:17.954981 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-28 00:43:17.954992 | orchestrator | Saturday 28 February 2026 00:43:14 +0000 (0:00:00.326) 0:00:25.617 *****
2026-02-28 00:43:17.955002 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:17.955013 | orchestrator |
2026-02-28 00:43:17.955051 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-28 00:43:17.955067 | orchestrator | Saturday 28 February 2026 00:43:14 +0000 (0:00:00.145) 0:00:25.762 *****
2026-02-28 00:43:17.955078 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:17.955099 | orchestrator |
2026-02-28 00:43:17.955110 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-28 00:43:17.955121 | orchestrator | Saturday 28 February 2026 00:43:14 +0000 (0:00:00.130) 0:00:25.893 *****
2026-02-28 00:43:17.955132 | orchestrator | ok: [testbed-node-4] => {
2026-02-28 00:43:17.955143 | orchestrator |  "ceph_osd_devices": {
2026-02-28 00:43:17.955154 | orchestrator |  "sdb": {
2026-02-28 00:43:17.955166 | orchestrator |  "osd_lvm_uuid": "73c4f4bf-6139-5634-9e57-de597eca9964"
2026-02-28 00:43:17.955177 | orchestrator |  },
2026-02-28 00:43:17.955188 | orchestrator |  "sdc": {
2026-02-28 00:43:17.955199 | orchestrator |  "osd_lvm_uuid": "17f6d453-f54a-57d2-bd55-b12b469b0db8"
2026-02-28 00:43:17.955211 | orchestrator |  }
2026-02-28 00:43:17.955221 | orchestrator |  }
2026-02-28 00:43:17.955232 | orchestrator | }
2026-02-28 00:43:17.955244 | orchestrator |
2026-02-28 00:43:17.955255 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-28 00:43:17.955266 | orchestrator | Saturday 28 February 2026 00:43:14 +0000 (0:00:00.146) 0:00:26.039 *****
2026-02-28 00:43:17.955276 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:17.955287 | orchestrator |
2026-02-28 00:43:17.955298 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-28 00:43:17.955309 | orchestrator | Saturday 28 February 2026 00:43:14 +0000 (0:00:00.142) 0:00:26.182 *****
2026-02-28 00:43:17.955319 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:17.955330 | orchestrator |
2026-02-28 00:43:17.955341 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-28 00:43:17.955352 | orchestrator | Saturday 28 February 2026 00:43:14 +0000 (0:00:00.137) 0:00:26.320 *****
2026-02-28 00:43:17.955363 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:43:17.955373 | orchestrator |
2026-02-28 00:43:17.955384 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-28 00:43:17.955402 | orchestrator | Saturday 28 February 2026 00:43:15 +0000 (0:00:00.139) 0:00:26.459 *****
2026-02-28 00:43:17.955413 | orchestrator | changed: [testbed-node-4] => {
2026-02-28 00:43:17.955424 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-02-28 00:43:17.955435 | orchestrator |  "ceph_osd_devices": {
2026-02-28 00:43:17.955446 | orchestrator |  "sdb": {
2026-02-28 00:43:17.955457 | orchestrator |  "osd_lvm_uuid": "73c4f4bf-6139-5634-9e57-de597eca9964"
2026-02-28 00:43:17.955468 | orchestrator |  },
2026-02-28 00:43:17.955479 | orchestrator |  "sdc": {
2026-02-28 00:43:17.955490 | orchestrator |  "osd_lvm_uuid": "17f6d453-f54a-57d2-bd55-b12b469b0db8"
2026-02-28 00:43:17.955501 | orchestrator |  }
2026-02-28 00:43:17.955512 | orchestrator |  },
2026-02-28 00:43:17.955523 | orchestrator |  "lvm_volumes": [
2026-02-28 00:43:17.955534 | orchestrator |  {
2026-02-28 00:43:17.955546 | orchestrator |  "data": "osd-block-73c4f4bf-6139-5634-9e57-de597eca9964",
2026-02-28 00:43:17.955557 | orchestrator |  "data_vg": "ceph-73c4f4bf-6139-5634-9e57-de597eca9964"
2026-02-28 00:43:17.955568 | orchestrator |  },
2026-02-28 00:43:17.955579 | orchestrator |  {
2026-02-28 00:43:17.955590 | orchestrator |  "data": "osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8",
2026-02-28 00:43:17.955601 | orchestrator |  "data_vg": "ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8"
2026-02-28 00:43:17.955612 | orchestrator |  }
2026-02-28 00:43:17.955623 | orchestrator |  ]
2026-02-28 00:43:17.955634 | orchestrator |  }
2026-02-28 00:43:17.955645 | orchestrator | }
2026-02-28 00:43:17.955656 | orchestrator |
2026-02-28 00:43:17.955667 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-28 00:43:17.955678 | orchestrator | Saturday 28 February 2026 00:43:15 +0000 (0:00:00.264) 0:00:26.723 *****
2026-02-28 00:43:17.955688 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-02-28 00:43:17.955699 | orchestrator |
2026-02-28 00:43:17.955717 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-28 00:43:17.955728 | orchestrator |
2026-02-28 00:43:17.955739 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-28 00:43:17.955750 | orchestrator | Saturday 28 February 2026 00:43:16 +0000 (0:00:01.248) 0:00:27.971 ***** 2026-02-28 00:43:17.955761 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-28 00:43:17.955772 | orchestrator | 2026-02-28 00:43:17.955783 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-28 00:43:17.955794 | orchestrator | Saturday 28 February 2026 00:43:17 +0000 (0:00:00.761) 0:00:28.732 ***** 2026-02-28 00:43:17.955805 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:43:17.955816 | orchestrator | 2026-02-28 00:43:17.955827 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:17.955838 | orchestrator | Saturday 28 February 2026 00:43:17 +0000 (0:00:00.239) 0:00:28.972 ***** 2026-02-28 00:43:17.955849 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-28 00:43:17.955860 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-28 00:43:17.955871 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-28 00:43:17.955882 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-28 00:43:17.955893 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-28 00:43:17.955911 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-28 00:43:27.033743 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-28 00:43:27.033883 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-28 00:43:27.033914 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-28 
00:43:27.033933 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-28 00:43:27.033951 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-28 00:43:27.033969 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-28 00:43:27.033987 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-28 00:43:27.034004 | orchestrator | 2026-02-28 00:43:27.034219 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:27.034239 | orchestrator | Saturday 28 February 2026 00:43:18 +0000 (0:00:00.417) 0:00:29.390 ***** 2026-02-28 00:43:27.034258 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:27.034278 | orchestrator | 2026-02-28 00:43:27.034296 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:27.034315 | orchestrator | Saturday 28 February 2026 00:43:18 +0000 (0:00:00.201) 0:00:29.592 ***** 2026-02-28 00:43:27.034333 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:27.034350 | orchestrator | 2026-02-28 00:43:27.034367 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:27.034385 | orchestrator | Saturday 28 February 2026 00:43:18 +0000 (0:00:00.243) 0:00:29.835 ***** 2026-02-28 00:43:27.034402 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:27.034420 | orchestrator | 2026-02-28 00:43:27.034437 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:27.034454 | orchestrator | Saturday 28 February 2026 00:43:18 +0000 (0:00:00.202) 0:00:30.037 ***** 2026-02-28 00:43:27.034472 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:27.034489 | orchestrator | 2026-02-28 00:43:27.034506 | orchestrator 
| TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:27.034522 | orchestrator | Saturday 28 February 2026 00:43:18 +0000 (0:00:00.254) 0:00:30.291 ***** 2026-02-28 00:43:27.034568 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:27.034586 | orchestrator | 2026-02-28 00:43:27.034601 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:27.034618 | orchestrator | Saturday 28 February 2026 00:43:19 +0000 (0:00:00.239) 0:00:30.531 ***** 2026-02-28 00:43:27.034635 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:27.034652 | orchestrator | 2026-02-28 00:43:27.034668 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:27.034685 | orchestrator | Saturday 28 February 2026 00:43:19 +0000 (0:00:00.253) 0:00:30.784 ***** 2026-02-28 00:43:27.034701 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:27.034718 | orchestrator | 2026-02-28 00:43:27.034734 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:27.034751 | orchestrator | Saturday 28 February 2026 00:43:19 +0000 (0:00:00.282) 0:00:31.066 ***** 2026-02-28 00:43:27.034767 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:27.034783 | orchestrator | 2026-02-28 00:43:27.034799 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:27.034816 | orchestrator | Saturday 28 February 2026 00:43:20 +0000 (0:00:00.286) 0:00:31.352 ***** 2026-02-28 00:43:27.034833 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878) 2026-02-28 00:43:27.034851 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878) 2026-02-28 00:43:27.034868 | orchestrator | 2026-02-28 00:43:27.034885 | orchestrator | TASK [Add 
known links to the list of available block devices] ****************** 2026-02-28 00:43:27.034902 | orchestrator | Saturday 28 February 2026 00:43:21 +0000 (0:00:01.131) 0:00:32.483 ***** 2026-02-28 00:43:27.034939 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_96cb3389-09b8-4702-8328-a447a406a3bc) 2026-02-28 00:43:27.034956 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_96cb3389-09b8-4702-8328-a447a406a3bc) 2026-02-28 00:43:27.034972 | orchestrator | 2026-02-28 00:43:27.034989 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:27.035005 | orchestrator | Saturday 28 February 2026 00:43:21 +0000 (0:00:00.436) 0:00:32.920 ***** 2026-02-28 00:43:27.035045 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_810ccfde-b37c-4538-b69c-a55db736621a) 2026-02-28 00:43:27.035064 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_810ccfde-b37c-4538-b69c-a55db736621a) 2026-02-28 00:43:27.035080 | orchestrator | 2026-02-28 00:43:27.035096 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:27.035113 | orchestrator | Saturday 28 February 2026 00:43:22 +0000 (0:00:00.507) 0:00:33.427 ***** 2026-02-28 00:43:27.035130 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_aebfdaae-19e6-4277-9533-aca5f477cfa9) 2026-02-28 00:43:27.035146 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_aebfdaae-19e6-4277-9533-aca5f477cfa9) 2026-02-28 00:43:27.035163 | orchestrator | 2026-02-28 00:43:27.035180 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:43:27.035197 | orchestrator | Saturday 28 February 2026 00:43:22 +0000 (0:00:00.480) 0:00:33.908 ***** 2026-02-28 00:43:27.035213 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-28 00:43:27.035229 | 
orchestrator | 2026-02-28 00:43:27.035241 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:43:27.035275 | orchestrator | Saturday 28 February 2026 00:43:22 +0000 (0:00:00.331) 0:00:34.239 ***** 2026-02-28 00:43:27.035286 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-28 00:43:27.035303 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-28 00:43:27.035320 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-28 00:43:27.035336 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-28 00:43:27.035366 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-28 00:43:27.035385 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-28 00:43:27.035401 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-28 00:43:27.035418 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-28 00:43:27.035435 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-28 00:43:27.035452 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-28 00:43:27.035470 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-02-28 00:43:27.035481 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-28 00:43:27.035490 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-28 00:43:27.035500 | orchestrator | 
2026-02-28 00:43:27.035510 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:43:27.035519 | orchestrator | Saturday 28 February 2026 00:43:23 +0000 (0:00:00.409) 0:00:34.648 ***** 2026-02-28 00:43:27.035529 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:27.035539 | orchestrator | 2026-02-28 00:43:27.035549 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:43:27.035558 | orchestrator | Saturday 28 February 2026 00:43:23 +0000 (0:00:00.203) 0:00:34.852 ***** 2026-02-28 00:43:27.035568 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:27.035577 | orchestrator | 2026-02-28 00:43:27.035587 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:43:27.035596 | orchestrator | Saturday 28 February 2026 00:43:23 +0000 (0:00:00.186) 0:00:35.038 ***** 2026-02-28 00:43:27.035606 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:27.035616 | orchestrator | 2026-02-28 00:43:27.035626 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:43:27.035635 | orchestrator | Saturday 28 February 2026 00:43:23 +0000 (0:00:00.234) 0:00:35.273 ***** 2026-02-28 00:43:27.035645 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:27.035654 | orchestrator | 2026-02-28 00:43:27.035664 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:43:27.035674 | orchestrator | Saturday 28 February 2026 00:43:24 +0000 (0:00:00.199) 0:00:35.473 ***** 2026-02-28 00:43:27.035683 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:27.035693 | orchestrator | 2026-02-28 00:43:27.035702 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:43:27.035712 | orchestrator | Saturday 28 February 2026 00:43:24 +0000 
(0:00:00.209) 0:00:35.682 ***** 2026-02-28 00:43:27.035722 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:27.035731 | orchestrator | 2026-02-28 00:43:27.035740 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:43:27.035748 | orchestrator | Saturday 28 February 2026 00:43:25 +0000 (0:00:00.691) 0:00:36.373 ***** 2026-02-28 00:43:27.035756 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:27.035764 | orchestrator | 2026-02-28 00:43:27.035772 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:43:27.035780 | orchestrator | Saturday 28 February 2026 00:43:25 +0000 (0:00:00.266) 0:00:36.639 ***** 2026-02-28 00:43:27.035788 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:27.035795 | orchestrator | 2026-02-28 00:43:27.035803 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:43:27.035811 | orchestrator | Saturday 28 February 2026 00:43:25 +0000 (0:00:00.207) 0:00:36.847 ***** 2026-02-28 00:43:27.035819 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-28 00:43:27.035833 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-02-28 00:43:27.035842 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-28 00:43:27.035850 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-28 00:43:27.035857 | orchestrator | 2026-02-28 00:43:27.035866 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:43:27.035874 | orchestrator | Saturday 28 February 2026 00:43:26 +0000 (0:00:00.687) 0:00:37.534 ***** 2026-02-28 00:43:27.035881 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:27.035900 | orchestrator | 2026-02-28 00:43:27.035908 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:43:27.035916 | orchestrator | 
Saturday 28 February 2026 00:43:26 +0000 (0:00:00.214) 0:00:37.749 ***** 2026-02-28 00:43:27.035924 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:27.035932 | orchestrator | 2026-02-28 00:43:27.035939 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:43:27.035947 | orchestrator | Saturday 28 February 2026 00:43:26 +0000 (0:00:00.201) 0:00:37.950 ***** 2026-02-28 00:43:27.035955 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:27.035963 | orchestrator | 2026-02-28 00:43:27.035971 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:43:27.035979 | orchestrator | Saturday 28 February 2026 00:43:26 +0000 (0:00:00.216) 0:00:38.167 ***** 2026-02-28 00:43:27.035987 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:27.035995 | orchestrator | 2026-02-28 00:43:27.036009 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-28 00:43:31.199394 | orchestrator | Saturday 28 February 2026 00:43:27 +0000 (0:00:00.194) 0:00:38.361 ***** 2026-02-28 00:43:31.199507 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-02-28 00:43:31.199534 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-02-28 00:43:31.199554 | orchestrator | 2026-02-28 00:43:31.199574 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-28 00:43:31.199593 | orchestrator | Saturday 28 February 2026 00:43:27 +0000 (0:00:00.184) 0:00:38.546 ***** 2026-02-28 00:43:31.199611 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:31.199632 | orchestrator | 2026-02-28 00:43:31.199650 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-28 00:43:31.199670 | orchestrator | Saturday 28 February 2026 00:43:27 +0000 (0:00:00.158) 0:00:38.705 ***** 
2026-02-28 00:43:31.199711 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:31.199731 | orchestrator | 2026-02-28 00:43:31.199751 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-28 00:43:31.199769 | orchestrator | Saturday 28 February 2026 00:43:27 +0000 (0:00:00.129) 0:00:38.835 ***** 2026-02-28 00:43:31.199787 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:31.199807 | orchestrator | 2026-02-28 00:43:31.199827 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-28 00:43:31.199848 | orchestrator | Saturday 28 February 2026 00:43:27 +0000 (0:00:00.412) 0:00:39.247 ***** 2026-02-28 00:43:31.199868 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:43:31.199889 | orchestrator | 2026-02-28 00:43:31.199907 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-28 00:43:31.199926 | orchestrator | Saturday 28 February 2026 00:43:28 +0000 (0:00:00.122) 0:00:39.369 ***** 2026-02-28 00:43:31.199945 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'}}) 2026-02-28 00:43:31.199972 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'}}) 2026-02-28 00:43:31.199994 | orchestrator | 2026-02-28 00:43:31.200013 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-28 00:43:31.200061 | orchestrator | Saturday 28 February 2026 00:43:28 +0000 (0:00:00.178) 0:00:39.548 ***** 2026-02-28 00:43:31.200082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'}})  2026-02-28 00:43:31.200130 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'}})  
2026-02-28 00:43:31.200151 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:31.200171 | orchestrator | 2026-02-28 00:43:31.200190 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-28 00:43:31.200209 | orchestrator | Saturday 28 February 2026 00:43:28 +0000 (0:00:00.166) 0:00:39.715 ***** 2026-02-28 00:43:31.200229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'}})  2026-02-28 00:43:31.200250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'}})  2026-02-28 00:43:31.200270 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:31.200290 | orchestrator | 2026-02-28 00:43:31.200310 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-28 00:43:31.200330 | orchestrator | Saturday 28 February 2026 00:43:28 +0000 (0:00:00.163) 0:00:39.878 ***** 2026-02-28 00:43:31.200351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'}})  2026-02-28 00:43:31.200372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'}})  2026-02-28 00:43:31.200391 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:31.200411 | orchestrator | 2026-02-28 00:43:31.200429 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-28 00:43:31.200448 | orchestrator | Saturday 28 February 2026 00:43:28 +0000 (0:00:00.131) 0:00:40.010 ***** 2026-02-28 00:43:31.200467 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:43:31.200486 | orchestrator | 2026-02-28 00:43:31.200505 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-28 00:43:31.200524 | 
orchestrator | Saturday 28 February 2026 00:43:28 +0000 (0:00:00.130) 0:00:40.140 ***** 2026-02-28 00:43:31.200543 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:43:31.200561 | orchestrator | 2026-02-28 00:43:31.200579 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-28 00:43:31.200598 | orchestrator | Saturday 28 February 2026 00:43:28 +0000 (0:00:00.142) 0:00:40.283 ***** 2026-02-28 00:43:31.200617 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:31.200637 | orchestrator | 2026-02-28 00:43:31.200658 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-28 00:43:31.200677 | orchestrator | Saturday 28 February 2026 00:43:29 +0000 (0:00:00.154) 0:00:40.438 ***** 2026-02-28 00:43:31.200695 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:31.200714 | orchestrator | 2026-02-28 00:43:31.200732 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-28 00:43:31.200752 | orchestrator | Saturday 28 February 2026 00:43:29 +0000 (0:00:00.137) 0:00:40.575 ***** 2026-02-28 00:43:31.200771 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:31.200790 | orchestrator | 2026-02-28 00:43:31.200806 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-28 00:43:31.200818 | orchestrator | Saturday 28 February 2026 00:43:29 +0000 (0:00:00.149) 0:00:40.725 ***** 2026-02-28 00:43:31.200829 | orchestrator | ok: [testbed-node-5] => { 2026-02-28 00:43:31.200840 | orchestrator |  "ceph_osd_devices": { 2026-02-28 00:43:31.200851 | orchestrator |  "sdb": { 2026-02-28 00:43:31.200884 | orchestrator |  "osd_lvm_uuid": "04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18" 2026-02-28 00:43:31.200896 | orchestrator |  }, 2026-02-28 00:43:31.200908 | orchestrator |  "sdc": { 2026-02-28 00:43:31.200918 | orchestrator |  "osd_lvm_uuid": 
"f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539" 2026-02-28 00:43:31.200929 | orchestrator |  } 2026-02-28 00:43:31.200941 | orchestrator |  } 2026-02-28 00:43:31.200952 | orchestrator | } 2026-02-28 00:43:31.200963 | orchestrator | 2026-02-28 00:43:31.200985 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-28 00:43:31.200997 | orchestrator | Saturday 28 February 2026 00:43:29 +0000 (0:00:00.122) 0:00:40.848 ***** 2026-02-28 00:43:31.201007 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:31.201018 | orchestrator | 2026-02-28 00:43:31.201053 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-28 00:43:31.201074 | orchestrator | Saturday 28 February 2026 00:43:29 +0000 (0:00:00.120) 0:00:40.968 ***** 2026-02-28 00:43:31.201094 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:31.201114 | orchestrator | 2026-02-28 00:43:31.201136 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-28 00:43:31.201159 | orchestrator | Saturday 28 February 2026 00:43:29 +0000 (0:00:00.323) 0:00:41.292 ***** 2026-02-28 00:43:31.201182 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:43:31.201204 | orchestrator | 2026-02-28 00:43:31.201223 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-28 00:43:31.201246 | orchestrator | Saturday 28 February 2026 00:43:30 +0000 (0:00:00.118) 0:00:41.410 ***** 2026-02-28 00:43:31.201266 | orchestrator | changed: [testbed-node-5] => { 2026-02-28 00:43:31.201283 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-28 00:43:31.201297 | orchestrator |  "ceph_osd_devices": { 2026-02-28 00:43:31.201310 | orchestrator |  "sdb": { 2026-02-28 00:43:31.201321 | orchestrator |  "osd_lvm_uuid": "04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18" 2026-02-28 00:43:31.201332 | orchestrator |  }, 2026-02-28 00:43:31.201343 | 
orchestrator |  "sdc": { 2026-02-28 00:43:31.201354 | orchestrator |  "osd_lvm_uuid": "f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539" 2026-02-28 00:43:31.201365 | orchestrator |  } 2026-02-28 00:43:31.201376 | orchestrator |  }, 2026-02-28 00:43:31.201387 | orchestrator |  "lvm_volumes": [ 2026-02-28 00:43:31.201398 | orchestrator |  { 2026-02-28 00:43:31.201409 | orchestrator |  "data": "osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18", 2026-02-28 00:43:31.201420 | orchestrator |  "data_vg": "ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18" 2026-02-28 00:43:31.201431 | orchestrator |  }, 2026-02-28 00:43:31.201446 | orchestrator |  { 2026-02-28 00:43:31.201457 | orchestrator |  "data": "osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539", 2026-02-28 00:43:31.201468 | orchestrator |  "data_vg": "ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539" 2026-02-28 00:43:31.201479 | orchestrator |  } 2026-02-28 00:43:31.201490 | orchestrator |  ] 2026-02-28 00:43:31.201501 | orchestrator |  } 2026-02-28 00:43:31.201512 | orchestrator | } 2026-02-28 00:43:31.201523 | orchestrator | 2026-02-28 00:43:31.201534 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-28 00:43:31.201545 | orchestrator | Saturday 28 February 2026 00:43:30 +0000 (0:00:00.204) 0:00:41.615 ***** 2026-02-28 00:43:31.201555 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-28 00:43:31.201566 | orchestrator | 2026-02-28 00:43:31.201577 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:43:31.201588 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-28 00:43:31.201600 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-28 00:43:31.201612 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-28 
00:43:31.201623 | orchestrator | 2026-02-28 00:43:31.201633 | orchestrator | 2026-02-28 00:43:31.201644 | orchestrator | 2026-02-28 00:43:31.201655 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:43:31.201666 | orchestrator | Saturday 28 February 2026 00:43:31 +0000 (0:00:00.911) 0:00:42.527 ***** 2026-02-28 00:43:31.201691 | orchestrator | =============================================================================== 2026-02-28 00:43:31.201711 | orchestrator | Write configuration file ------------------------------------------------ 4.02s 2026-02-28 00:43:31.201729 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.25s 2026-02-28 00:43:31.201759 | orchestrator | Add known links to the list of available block devices ------------------ 1.23s 2026-02-28 00:43:31.201779 | orchestrator | Add known partitions to the list of available block devices ------------- 1.17s 2026-02-28 00:43:31.201798 | orchestrator | Add known links to the list of available block devices ------------------ 1.13s 2026-02-28 00:43:31.201818 | orchestrator | Add known partitions to the list of available block devices ------------- 1.08s 2026-02-28 00:43:31.201837 | orchestrator | Print configuration data ------------------------------------------------ 1.01s 2026-02-28 00:43:31.201857 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s 2026-02-28 00:43:31.201868 | orchestrator | Get initial list of available block devices ----------------------------- 0.74s 2026-02-28 00:43:31.201879 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.71s 2026-02-28 00:43:31.201890 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-02-28 00:43:31.201901 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-02-28 
00:43:31.201912 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.70s 2026-02-28 00:43:31.201934 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s 2026-02-28 00:43:31.482786 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s 2026-02-28 00:43:31.483469 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s 2026-02-28 00:43:31.483499 | orchestrator | Set DB devices config data ---------------------------------------------- 0.62s 2026-02-28 00:43:31.483509 | orchestrator | Print DB devices -------------------------------------------------------- 0.60s 2026-02-28 00:43:31.483518 | orchestrator | Add known links to the list of available block devices ------------------ 0.55s 2026-02-28 00:43:31.483528 | orchestrator | Add known links to the list of available block devices ------------------ 0.53s 2026-02-28 00:43:54.027256 | orchestrator | 2026-02-28 00:43:54 | INFO  | Task c2bcddc7-b2b4-4ae2-9387-eda41c5ee970 (sync inventory) is running in background. Output coming soon. 
2026-02-28 00:44:20.848090 | orchestrator | 2026-02-28 00:43:55 | INFO  | Starting group_vars file reorganization
2026-02-28 00:44:20.848181 | orchestrator | 2026-02-28 00:43:55 | INFO  | Moved 0 file(s) to their respective directories
2026-02-28 00:44:20.848193 | orchestrator | 2026-02-28 00:43:55 | INFO  | Group_vars file reorganization completed
2026-02-28 00:44:20.848201 | orchestrator | 2026-02-28 00:43:58 | INFO  | Starting variable preparation from inventory
2026-02-28 00:44:20.848208 | orchestrator | 2026-02-28 00:44:01 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-02-28 00:44:20.848216 | orchestrator | 2026-02-28 00:44:01 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-02-28 00:44:20.848238 | orchestrator | 2026-02-28 00:44:01 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-02-28 00:44:20.848245 | orchestrator | 2026-02-28 00:44:01 | INFO  | 3 file(s) written, 6 host(s) processed
2026-02-28 00:44:20.848252 | orchestrator | 2026-02-28 00:44:01 | INFO  | Variable preparation completed
2026-02-28 00:44:20.848259 | orchestrator | 2026-02-28 00:44:03 | INFO  | Starting inventory overwrite handling
2026-02-28 00:44:20.848266 | orchestrator | 2026-02-28 00:44:03 | INFO  | Handling group overwrites in 99-overwrite
2026-02-28 00:44:20.848273 | orchestrator | 2026-02-28 00:44:03 | INFO  | Removing group frr:children from 60-generic
2026-02-28 00:44:20.848302 | orchestrator | 2026-02-28 00:44:03 | INFO  | Removing group netbird:children from 50-infrastructure
2026-02-28 00:44:20.848309 | orchestrator | 2026-02-28 00:44:03 | INFO  | Removing group ceph-rgw from 50-ceph
2026-02-28 00:44:20.848317 | orchestrator | 2026-02-28 00:44:03 | INFO  | Removing group ceph-mds from 50-ceph
2026-02-28 00:44:20.848323 | orchestrator | 2026-02-28 00:44:03 | INFO  | Handling group overwrites in 20-roles
2026-02-28 00:44:20.848330 | orchestrator | 2026-02-28 00:44:03 | INFO  | Removing group k3s_node from 50-infrastructure
2026-02-28 00:44:20.848337 | orchestrator | 2026-02-28 00:44:03 | INFO  | Removed 5 group(s) in total
2026-02-28 00:44:20.848344 | orchestrator | 2026-02-28 00:44:03 | INFO  | Inventory overwrite handling completed
2026-02-28 00:44:20.848350 | orchestrator | 2026-02-28 00:44:04 | INFO  | Starting merge of inventory files
2026-02-28 00:44:20.848357 | orchestrator | 2026-02-28 00:44:04 | INFO  | Inventory files merged successfully
2026-02-28 00:44:20.848364 | orchestrator | 2026-02-28 00:44:08 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-02-28 00:44:20.848370 | orchestrator | 2026-02-28 00:44:19 | INFO  | Successfully wrote ClusterShell configuration
2026-02-28 00:44:20.848377 | orchestrator | [master 539f352] 2026-02-28-00-44
2026-02-28 00:44:20.848385 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-02-28 00:44:23.315114 | orchestrator | 2026-02-28 00:44:23 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-02-28 00:44:23.383122 | orchestrator | 2026-02-28 00:44:23 | INFO  | Task e2f698e2-ab68-407c-97eb-2f72a470617d (ceph-create-lvm-devices) was prepared for execution.
2026-02-28 00:44:23.383211 | orchestrator | 2026-02-28 00:44:23 | INFO  | It takes a moment until task e2f698e2-ab68-407c-97eb-2f72a470617d (ceph-create-lvm-devices) has been started and output is visible here.
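Editor's note: the ceph-create-lvm-devices play below derives LVM volume group and logical volume names from the `osd_lvm_uuid` assigned to each entry in `ceph_osd_devices` (VGs named `ceph-<uuid>`, LVs named `osd-block-<uuid>`, as the "Create block VGs" / "Create block LVs" task items show). This is a minimal illustrative sketch of that naming scheme, not the playbook's actual implementation; the helper name `lvm_names` is hypothetical, and the UUIDs are the ones visible in this log.

```python
# Sketch (assumption): reproduce the VG/LV naming visible in the
# ceph-create-lvm-devices output, where each ceph_osd_devices entry
# carries an osd_lvm_uuid.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "4d18609e-ecdb-578d-a05b-e7913934f080"},
    "sdc": {"osd_lvm_uuid": "dcf33d59-3ae6-5017-b2aa-1b02884ceea7"},
}


def lvm_names(devices: dict) -> list[dict]:
    """Map each OSD device to the block VG/LV names used by the play."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",      # LV name
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",        # VG name
        }
        for spec in devices.values()
    ]


for item in lvm_names(ceph_osd_devices):
    print(item["data_vg"], "/", item["data"])
```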
2026-02-28 00:44:35.853405 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-28 00:44:35.853499 | orchestrator | 2.16.14
2026-02-28 00:44:35.853511 | orchestrator |
2026-02-28 00:44:35.853518 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-28 00:44:35.853526 | orchestrator |
2026-02-28 00:44:35.853533 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-28 00:44:35.853540 | orchestrator | Saturday 28 February 2026 00:44:27 +0000 (0:00:00.305) 0:00:00.305 *****
2026-02-28 00:44:35.853548 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-28 00:44:35.853556 | orchestrator |
2026-02-28 00:44:35.853563 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-28 00:44:35.853570 | orchestrator | Saturday 28 February 2026 00:44:28 +0000 (0:00:00.274) 0:00:00.580 *****
2026-02-28 00:44:35.853580 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:44:35.853591 | orchestrator |
2026-02-28 00:44:35.853601 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:35.853609 | orchestrator | Saturday 28 February 2026 00:44:28 +0000 (0:00:00.241) 0:00:00.821 *****
2026-02-28 00:44:35.853615 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-28 00:44:35.853623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-28 00:44:35.853631 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-28 00:44:35.853641 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-28 00:44:35.853651 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-28 00:44:35.853658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-28 00:44:35.853664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-28 00:44:35.853714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-28 00:44:35.853722 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-28 00:44:35.853729 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-28 00:44:35.853735 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-28 00:44:35.853742 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-28 00:44:35.853749 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-28 00:44:35.853756 | orchestrator |
2026-02-28 00:44:35.853762 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:35.853769 | orchestrator | Saturday 28 February 2026 00:44:29 +0000 (0:00:00.517) 0:00:01.339 *****
2026-02-28 00:44:35.853775 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:35.853781 | orchestrator |
2026-02-28 00:44:35.853787 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:35.853793 | orchestrator | Saturday 28 February 2026 00:44:29 +0000 (0:00:00.278) 0:00:01.618 *****
2026-02-28 00:44:35.853800 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:35.853806 | orchestrator |
2026-02-28 00:44:35.853813 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:35.853819 | orchestrator | Saturday 28 February 2026 00:44:29 +0000 (0:00:00.174) 0:00:01.792 *****
2026-02-28 00:44:35.853826 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:35.853832 | orchestrator |
2026-02-28 00:44:35.853839 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:35.853845 | orchestrator | Saturday 28 February 2026 00:44:29 +0000 (0:00:00.242) 0:00:02.034 *****
2026-02-28 00:44:35.853852 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:35.853859 | orchestrator |
2026-02-28 00:44:35.853865 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:35.853871 | orchestrator | Saturday 28 February 2026 00:44:30 +0000 (0:00:00.216) 0:00:02.459 *****
2026-02-28 00:44:35.853881 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:35.853887 | orchestrator |
2026-02-28 00:44:35.853893 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:35.853917 | orchestrator | Saturday 28 February 2026 00:44:30 +0000 (0:00:00.216) 0:00:02.459 *****
2026-02-28 00:44:35.853923 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:35.853929 | orchestrator |
2026-02-28 00:44:35.853935 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:35.853942 | orchestrator | Saturday 28 February 2026 00:44:30 +0000 (0:00:00.213) 0:00:02.673 *****
2026-02-28 00:44:35.853948 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:35.853954 | orchestrator |
2026-02-28 00:44:35.853961 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:35.853967 | orchestrator | Saturday 28 February 2026 00:44:30 +0000 (0:00:00.216) 0:00:02.890 *****
2026-02-28 00:44:35.853974 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:35.853980 | orchestrator |
2026-02-28 00:44:35.853987 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:35.853993 | orchestrator | Saturday 28 February 2026 00:44:30 +0000 (0:00:00.225) 0:00:03.116 *****
2026-02-28 00:44:35.854000 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103)
2026-02-28 00:44:35.854074 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103)
2026-02-28 00:44:35.854082 | orchestrator |
2026-02-28 00:44:35.854088 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:35.854114 | orchestrator | Saturday 28 February 2026 00:44:31 +0000 (0:00:00.457) 0:00:03.573 *****
2026-02-28 00:44:35.854131 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9a1bcb93-f154-4a17-8f9d-a00d049f4cc1)
2026-02-28 00:44:35.854138 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9a1bcb93-f154-4a17-8f9d-a00d049f4cc1)
2026-02-28 00:44:35.854145 | orchestrator |
2026-02-28 00:44:35.854151 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:35.854158 | orchestrator | Saturday 28 February 2026 00:44:31 +0000 (0:00:00.667) 0:00:04.241 *****
2026-02-28 00:44:35.854165 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_16fcc6e7-951a-43ed-8f3a-017ae19ace76)
2026-02-28 00:44:35.854171 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_16fcc6e7-951a-43ed-8f3a-017ae19ace76)
2026-02-28 00:44:35.854178 | orchestrator |
2026-02-28 00:44:35.854184 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:35.854190 | orchestrator | Saturday 28 February 2026 00:44:32 +0000 (0:00:00.648) 0:00:04.889 *****
2026-02-28 00:44:35.854197 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_afb4b4ce-eec7-46b2-91b5-87577cac503b)
2026-02-28 00:44:35.854203 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_afb4b4ce-eec7-46b2-91b5-87577cac503b)
2026-02-28 00:44:35.854210 | orchestrator |
2026-02-28 00:44:35.854217 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:35.854223 | orchestrator | Saturday 28 February 2026 00:44:33 +0000 (0:00:00.887) 0:00:05.777 *****
2026-02-28 00:44:35.854230 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-28 00:44:35.854236 | orchestrator |
2026-02-28 00:44:35.854243 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:35.854248 | orchestrator | Saturday 28 February 2026 00:44:33 +0000 (0:00:00.362) 0:00:06.139 *****
2026-02-28 00:44:35.854255 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-28 00:44:35.854262 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-28 00:44:35.854269 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-28 00:44:35.854276 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-28 00:44:35.854284 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-28 00:44:35.854297 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-28 00:44:35.854304 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-28 00:44:35.854310 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-02-28 00:44:35.854316 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-02-28 00:44:35.854323 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-02-28 00:44:35.854329 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-02-28 00:44:35.854336 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-02-28 00:44:35.854342 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-02-28 00:44:35.854349 | orchestrator |
2026-02-28 00:44:35.854356 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:35.854362 | orchestrator | Saturday 28 February 2026 00:44:34 +0000 (0:00:00.476) 0:00:06.616 *****
2026-02-28 00:44:35.854368 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:35.854375 | orchestrator |
2026-02-28 00:44:35.854381 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:35.854388 | orchestrator | Saturday 28 February 2026 00:44:34 +0000 (0:00:00.220) 0:00:06.837 *****
2026-02-28 00:44:35.854403 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:35.854409 | orchestrator |
2026-02-28 00:44:35.854418 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:35.854425 | orchestrator | Saturday 28 February 2026 00:44:34 +0000 (0:00:00.215) 0:00:07.052 *****
2026-02-28 00:44:35.854432 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:35.854439 | orchestrator |
2026-02-28 00:44:35.854446 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:35.854453 | orchestrator | Saturday 28 February 2026 00:44:34 +0000 (0:00:00.236) 0:00:07.289 *****
2026-02-28 00:44:35.854459 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:35.854466 | orchestrator |
2026-02-28 00:44:35.854472 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:35.854479 | orchestrator | Saturday 28 February 2026 00:44:35 +0000 (0:00:00.236) 0:00:07.526 *****
2026-02-28 00:44:35.854486 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:35.854493 | orchestrator |
2026-02-28 00:44:35.854499 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:35.854505 | orchestrator | Saturday 28 February 2026 00:44:35 +0000 (0:00:00.199) 0:00:07.725 *****
2026-02-28 00:44:35.854512 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:35.854518 | orchestrator |
2026-02-28 00:44:35.854525 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:35.854532 | orchestrator | Saturday 28 February 2026 00:44:35 +0000 (0:00:00.243) 0:00:07.968 *****
2026-02-28 00:44:35.854539 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:35.854546 | orchestrator |
2026-02-28 00:44:35.854561 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:44.604224 | orchestrator | Saturday 28 February 2026 00:44:35 +0000 (0:00:00.206) 0:00:08.174 *****
2026-02-28 00:44:44.604311 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:44.604322 | orchestrator |
2026-02-28 00:44:44.604331 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:44.604339 | orchestrator | Saturday 28 February 2026 00:44:36 +0000 (0:00:00.211) 0:00:08.386 *****
2026-02-28 00:44:44.604346 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-02-28 00:44:44.604354 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-02-28 00:44:44.604362 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-02-28 00:44:44.604369 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-02-28 00:44:44.604375 | orchestrator |
2026-02-28 00:44:44.604382 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:44.604389 | orchestrator | Saturday 28 February 2026 00:44:37 +0000 (0:00:01.138) 0:00:09.524 *****
2026-02-28 00:44:44.604408 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:44.604416 | orchestrator |
2026-02-28 00:44:44.604423 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:44.604430 | orchestrator | Saturday 28 February 2026 00:44:37 +0000 (0:00:00.209) 0:00:09.734 *****
2026-02-28 00:44:44.604436 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:44.604443 | orchestrator |
2026-02-28 00:44:44.604450 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:44.604456 | orchestrator | Saturday 28 February 2026 00:44:37 +0000 (0:00:00.203) 0:00:09.937 *****
2026-02-28 00:44:44.604463 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:44.604470 | orchestrator |
2026-02-28 00:44:44.604476 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:44.604483 | orchestrator | Saturday 28 February 2026 00:44:37 +0000 (0:00:00.194) 0:00:10.132 *****
2026-02-28 00:44:44.604490 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:44.604496 | orchestrator |
2026-02-28 00:44:44.604507 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-28 00:44:44.604518 | orchestrator | Saturday 28 February 2026 00:44:38 +0000 (0:00:00.206) 0:00:10.339 *****
2026-02-28 00:44:44.604529 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:44.604576 | orchestrator |
2026-02-28 00:44:44.604585 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-28 00:44:44.604601 | orchestrator | Saturday 28 February 2026 00:44:38 +0000 (0:00:00.163) 0:00:10.502 *****
2026-02-28 00:44:44.604613 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d18609e-ecdb-578d-a05b-e7913934f080'}})
2026-02-28 00:44:44.604636 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dcf33d59-3ae6-5017-b2aa-1b02884ceea7'}})
2026-02-28 00:44:44.604647 | orchestrator |
2026-02-28 00:44:44.604658 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-28 00:44:44.604669 | orchestrator | Saturday 28 February 2026 00:44:38 +0000 (0:00:00.260) 0:00:10.763 *****
2026-02-28 00:44:44.604682 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4d18609e-ecdb-578d-a05b-e7913934f080', 'data_vg': 'ceph-4d18609e-ecdb-578d-a05b-e7913934f080'})
2026-02-28 00:44:44.604696 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7', 'data_vg': 'ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7'})
2026-02-28 00:44:44.604706 | orchestrator |
2026-02-28 00:44:44.604718 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-28 00:44:44.604725 | orchestrator | Saturday 28 February 2026 00:44:40 +0000 (0:00:02.241) 0:00:13.004 *****
2026-02-28 00:44:44.604732 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d18609e-ecdb-578d-a05b-e7913934f080', 'data_vg': 'ceph-4d18609e-ecdb-578d-a05b-e7913934f080'})
2026-02-28 00:44:44.604740 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7', 'data_vg': 'ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7'})
2026-02-28 00:44:44.604751 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:44.604762 | orchestrator |
2026-02-28 00:44:44.604774 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-28 00:44:44.604785 | orchestrator | Saturday 28 February 2026 00:44:40 +0000 (0:00:00.259) 0:00:13.264 *****
2026-02-28 00:44:44.604796 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4d18609e-ecdb-578d-a05b-e7913934f080', 'data_vg': 'ceph-4d18609e-ecdb-578d-a05b-e7913934f080'})
2026-02-28 00:44:44.604808 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7', 'data_vg': 'ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7'})
2026-02-28 00:44:44.604819 | orchestrator |
2026-02-28 00:44:44.604850 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-28 00:44:44.604858 | orchestrator | Saturday 28 February 2026 00:44:42 +0000 (0:00:01.490) 0:00:14.755 *****
2026-02-28 00:44:44.604865 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d18609e-ecdb-578d-a05b-e7913934f080', 'data_vg': 'ceph-4d18609e-ecdb-578d-a05b-e7913934f080'})
2026-02-28 00:44:44.604872 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7', 'data_vg': 'ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7'})
2026-02-28 00:44:44.604878 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:44.604885 | orchestrator |
2026-02-28 00:44:44.604892 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-28 00:44:44.604898 | orchestrator | Saturday 28 February 2026 00:44:42 +0000 (0:00:00.149) 0:00:14.904 *****
2026-02-28 00:44:44.604921 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:44.604929 | orchestrator |
2026-02-28 00:44:44.604936 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-28 00:44:44.604943 | orchestrator | Saturday 28 February 2026 00:44:42 +0000 (0:00:00.137) 0:00:15.042 *****
2026-02-28 00:44:44.604950 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d18609e-ecdb-578d-a05b-e7913934f080', 'data_vg': 'ceph-4d18609e-ecdb-578d-a05b-e7913934f080'})
2026-02-28 00:44:44.604958 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7', 'data_vg': 'ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7'})
2026-02-28 00:44:44.604980 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:44.604991 | orchestrator |
2026-02-28 00:44:44.605035 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-28 00:44:44.605059 | orchestrator | Saturday 28 February 2026 00:44:43 +0000 (0:00:00.407) 0:00:15.450 *****
2026-02-28 00:44:44.605071 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:44.605083 | orchestrator |
2026-02-28 00:44:44.605094 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-28 00:44:44.605106 | orchestrator | Saturday 28 February 2026 00:44:43 +0000 (0:00:00.147) 0:00:15.597 *****
2026-02-28 00:44:44.605113 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d18609e-ecdb-578d-a05b-e7913934f080', 'data_vg': 'ceph-4d18609e-ecdb-578d-a05b-e7913934f080'})
2026-02-28 00:44:44.605120 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7', 'data_vg': 'ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7'})
2026-02-28 00:44:44.605127 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:44.605133 | orchestrator |
2026-02-28 00:44:44.605140 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-28 00:44:44.605147 | orchestrator | Saturday 28 February 2026 00:44:43 +0000 (0:00:00.169) 0:00:15.766 *****
2026-02-28 00:44:44.605154 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:44.605161 | orchestrator |
2026-02-28 00:44:44.605167 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-28 00:44:44.605174 | orchestrator | Saturday 28 February 2026 00:44:43 +0000 (0:00:00.163) 0:00:15.930 *****
2026-02-28 00:44:44.605181 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d18609e-ecdb-578d-a05b-e7913934f080', 'data_vg': 'ceph-4d18609e-ecdb-578d-a05b-e7913934f080'})
2026-02-28 00:44:44.605193 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7', 'data_vg': 'ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7'})
2026-02-28 00:44:44.605200 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:44.605207 | orchestrator |
2026-02-28 00:44:44.605214 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-28 00:44:44.605220 | orchestrator | Saturday 28 February 2026 00:44:43 +0000 (0:00:00.178) 0:00:16.108 *****
2026-02-28 00:44:44.605227 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:44:44.605234 | orchestrator |
2026-02-28 00:44:44.605241 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-28 00:44:44.605248 | orchestrator | Saturday 28 February 2026 00:44:43 +0000 (0:00:00.145) 0:00:16.254 *****
2026-02-28 00:44:44.605258 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d18609e-ecdb-578d-a05b-e7913934f080', 'data_vg': 'ceph-4d18609e-ecdb-578d-a05b-e7913934f080'})
2026-02-28 00:44:44.605269 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7', 'data_vg': 'ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7'})
2026-02-28 00:44:44.605295 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:44.605312 | orchestrator |
2026-02-28 00:44:44.605319 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-28 00:44:44.605326 | orchestrator | Saturday 28 February 2026 00:44:44 +0000 (0:00:00.202) 0:00:16.457 *****
2026-02-28 00:44:44.605333 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d18609e-ecdb-578d-a05b-e7913934f080', 'data_vg': 'ceph-4d18609e-ecdb-578d-a05b-e7913934f080'})
2026-02-28 00:44:44.605340 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7', 'data_vg': 'ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7'})
2026-02-28 00:44:44.605346 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:44.605353 | orchestrator |
2026-02-28 00:44:44.605359 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-28 00:44:44.605381 | orchestrator | Saturday 28 February 2026 00:44:44 +0000 (0:00:00.156) 0:00:16.613 *****
2026-02-28 00:44:44.605393 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d18609e-ecdb-578d-a05b-e7913934f080', 'data_vg': 'ceph-4d18609e-ecdb-578d-a05b-e7913934f080'})
2026-02-28 00:44:44.605406 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7', 'data_vg': 'ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7'})
2026-02-28 00:44:44.605418 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:44.605431 | orchestrator |
2026-02-28 00:44:44.605443 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-28 00:44:44.605452 | orchestrator | Saturday 28 February 2026 00:44:44 +0000 (0:00:00.175) 0:00:16.789 *****
2026-02-28 00:44:44.605459 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:44.605466 | orchestrator |
2026-02-28 00:44:44.605472 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-28 00:44:44.605487 | orchestrator | Saturday 28 February 2026 00:44:44 +0000 (0:00:00.137) 0:00:16.927 *****
2026-02-28 00:44:51.926433 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:51.926609 | orchestrator |
2026-02-28 00:44:51.926632 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-28 00:44:51.926648 | orchestrator | Saturday 28 February 2026 00:44:44 +0000 (0:00:00.137) 0:00:17.065 *****
2026-02-28 00:44:51.926662 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:51.926674 | orchestrator |
2026-02-28 00:44:51.926688 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-28 00:44:51.926701 | orchestrator | Saturday 28 February 2026 00:44:44 +0000 (0:00:00.137) 0:00:17.202 *****
2026-02-28 00:44:51.926715 | orchestrator | ok: [testbed-node-3] => {
2026-02-28 00:44:51.926729 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-02-28 00:44:51.926745 | orchestrator | }
2026-02-28 00:44:51.926758 | orchestrator |
2026-02-28 00:44:51.926771 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-28 00:44:51.926784 | orchestrator | Saturday 28 February 2026 00:44:45 +0000 (0:00:00.371) 0:00:17.573 *****
2026-02-28 00:44:51.926797 | orchestrator | ok: [testbed-node-3] => {
2026-02-28 00:44:51.926811 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-02-28 00:44:51.926824 | orchestrator | }
2026-02-28 00:44:51.926839 | orchestrator |
2026-02-28 00:44:51.926852 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-28 00:44:51.926865 | orchestrator | Saturday 28 February 2026 00:44:45 +0000 (0:00:00.145) 0:00:17.719 *****
2026-02-28 00:44:51.926878 | orchestrator | ok: [testbed-node-3] => {
2026-02-28 00:44:51.926892 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-02-28 00:44:51.926906 | orchestrator | }
2026-02-28 00:44:51.926918 | orchestrator |
2026-02-28 00:44:51.926932 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-28 00:44:51.926945 | orchestrator | Saturday 28 February 2026 00:44:45 +0000 (0:00:00.150) 0:00:17.869 *****
2026-02-28 00:44:51.926958 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:44:51.926974 | orchestrator |
2026-02-28 00:44:51.926988 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-28 00:44:51.927040 | orchestrator | Saturday 28 February 2026 00:44:46 +0000 (0:00:00.770) 0:00:18.640 *****
2026-02-28 00:44:51.927050 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:44:51.927058 | orchestrator |
2026-02-28 00:44:51.927067 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-28 00:44:51.927075 | orchestrator | Saturday 28 February 2026 00:44:46 +0000 (0:00:00.578) 0:00:19.219 *****
2026-02-28 00:44:51.927083 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:44:51.927092 | orchestrator |
2026-02-28 00:44:51.927100 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-28 00:44:51.927109 | orchestrator | Saturday 28 February 2026 00:44:47 +0000 (0:00:00.225) 0:00:19.802 *****
2026-02-28 00:44:51.927117 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:44:51.927126 | orchestrator |
2026-02-28 00:44:51.927152 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-28 00:44:51.927161 | orchestrator | Saturday 28 February 2026 00:44:47 +0000 (0:00:00.146) 0:00:20.028 *****
2026-02-28 00:44:51.927169 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:51.927177 | orchestrator |
2026-02-28 00:44:51.927186 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-28 00:44:51.927194 | orchestrator | Saturday 28 February 2026 00:44:47 +0000 (0:00:00.135) 0:00:20.174 *****
2026-02-28 00:44:51.927202 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:51.927209 | orchestrator |
2026-02-28 00:44:51.927217 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-28 00:44:51.927224 | orchestrator | Saturday 28 February 2026 00:44:47 +0000 (0:00:00.150) 0:00:20.309 *****
2026-02-28 00:44:51.927231 | orchestrator | ok: [testbed-node-3] => {
2026-02-28 00:44:51.927238 | orchestrator |  "vgs_report": {
2026-02-28 00:44:51.927246 | orchestrator |  "vg": []
2026-02-28 00:44:51.927254 | orchestrator |  }
2026-02-28 00:44:51.927261 | orchestrator | }
2026-02-28 00:44:51.927268 | orchestrator |
2026-02-28 00:44:51.927276 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-28 00:44:51.927283 | orchestrator | Saturday 28 February 2026 00:44:48 +0000 (0:00:00.150) 0:00:20.459 *****
2026-02-28 00:44:51.927290 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:51.927297 | orchestrator |
2026-02-28 00:44:51.927304 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-28 00:44:51.927312 | orchestrator | Saturday 28 February 2026 00:44:48 +0000 (0:00:00.145) 0:00:20.606 *****
2026-02-28 00:44:51.927319 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:51.927326 | orchestrator |
2026-02-28 00:44:51.927333 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-28 00:44:51.927340 | orchestrator | Saturday 28 February 2026 00:44:48 +0000 (0:00:00.145) 0:00:20.752 *****
2026-02-28 00:44:51.927348 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:51.927355 | orchestrator |
2026-02-28 00:44:51.927362 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-28 00:44:51.927369 | orchestrator | Saturday 28 February 2026 00:44:48 +0000 (0:00:00.364) 0:00:21.116 *****
2026-02-28 00:44:51.927376 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:51.927383 | orchestrator |
2026-02-28 00:44:51.927390 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-28 00:44:51.927398 | orchestrator | Saturday 28 February 2026 00:44:48 +0000 (0:00:00.179) 0:00:21.296 *****
2026-02-28 00:44:51.927405 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:51.927412 | orchestrator |
2026-02-28 00:44:51.927419 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-28 00:44:51.927426 | orchestrator | Saturday 28 February 2026 00:44:49 +0000 (0:00:00.160) 0:00:21.456 *****
2026-02-28 00:44:51.927433 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:51.927440 | orchestrator |
2026-02-28 00:44:51.927447 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-28 00:44:51.927454 | orchestrator | Saturday 28 February 2026 00:44:49 +0000 (0:00:00.170) 0:00:21.626 *****
2026-02-28 00:44:51.927461 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:51.927469 | orchestrator |
2026-02-28 00:44:51.927476 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-28 00:44:51.927483 | orchestrator | Saturday 28 February 2026 00:44:49 +0000 (0:00:00.170) 0:00:21.796 *****
2026-02-28 00:44:51.927508 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:51.927516 | orchestrator |
2026-02-28 00:44:51.927524 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-28 00:44:51.927531 | orchestrator | Saturday 28 February 2026 00:44:49 +0000 (0:00:00.215) 0:00:22.012 *****
2026-02-28 00:44:51.927538 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:51.927545 | orchestrator |
2026-02-28 00:44:51.927552 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-28 00:44:51.927567 | orchestrator | Saturday 28 February 2026 00:44:49 +0000 (0:00:00.190) 0:00:22.203 *****
2026-02-28 00:44:51.927574 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:51.927581 | orchestrator |
2026-02-28 00:44:51.927589
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-28 00:44:51.927596 | orchestrator | Saturday 28 February 2026 00:44:50 +0000 (0:00:00.156) 0:00:22.359 ***** 2026-02-28 00:44:51.927603 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:51.927610 | orchestrator | 2026-02-28 00:44:51.927633 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-28 00:44:51.927640 | orchestrator | Saturday 28 February 2026 00:44:50 +0000 (0:00:00.154) 0:00:22.513 ***** 2026-02-28 00:44:51.927647 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:51.927655 | orchestrator | 2026-02-28 00:44:51.927662 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-28 00:44:51.927669 | orchestrator | Saturday 28 February 2026 00:44:50 +0000 (0:00:00.145) 0:00:22.659 ***** 2026-02-28 00:44:51.927676 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:51.927683 | orchestrator | 2026-02-28 00:44:51.927690 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-28 00:44:51.927698 | orchestrator | Saturday 28 February 2026 00:44:50 +0000 (0:00:00.154) 0:00:22.813 ***** 2026-02-28 00:44:51.927705 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:51.927712 | orchestrator | 2026-02-28 00:44:51.927719 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-28 00:44:51.927726 | orchestrator | Saturday 28 February 2026 00:44:50 +0000 (0:00:00.141) 0:00:22.955 ***** 2026-02-28 00:44:51.927735 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d18609e-ecdb-578d-a05b-e7913934f080', 'data_vg': 'ceph-4d18609e-ecdb-578d-a05b-e7913934f080'})  2026-02-28 00:44:51.927744 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7', 'data_vg': 
'ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7'})  2026-02-28 00:44:51.927751 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:51.927758 | orchestrator | 2026-02-28 00:44:51.927766 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-28 00:44:51.927777 | orchestrator | Saturday 28 February 2026 00:44:51 +0000 (0:00:00.545) 0:00:23.501 ***** 2026-02-28 00:44:51.927784 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d18609e-ecdb-578d-a05b-e7913934f080', 'data_vg': 'ceph-4d18609e-ecdb-578d-a05b-e7913934f080'})  2026-02-28 00:44:51.927792 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7', 'data_vg': 'ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7'})  2026-02-28 00:44:51.927799 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:51.927806 | orchestrator | 2026-02-28 00:44:51.927813 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-28 00:44:51.927821 | orchestrator | Saturday 28 February 2026 00:44:51 +0000 (0:00:00.171) 0:00:23.672 ***** 2026-02-28 00:44:51.927828 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d18609e-ecdb-578d-a05b-e7913934f080', 'data_vg': 'ceph-4d18609e-ecdb-578d-a05b-e7913934f080'})  2026-02-28 00:44:51.927835 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7', 'data_vg': 'ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7'})  2026-02-28 00:44:51.927843 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:51.927850 | orchestrator | 2026-02-28 00:44:51.927857 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-28 00:44:51.927864 | orchestrator | Saturday 28 February 2026 00:44:51 +0000 (0:00:00.170) 0:00:23.842 ***** 2026-02-28 00:44:51.927871 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-4d18609e-ecdb-578d-a05b-e7913934f080', 'data_vg': 'ceph-4d18609e-ecdb-578d-a05b-e7913934f080'})  2026-02-28 00:44:51.927879 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7', 'data_vg': 'ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7'})  2026-02-28 00:44:51.927892 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:51.927899 | orchestrator | 2026-02-28 00:44:51.927906 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-28 00:44:51.927914 | orchestrator | Saturday 28 February 2026 00:44:51 +0000 (0:00:00.178) 0:00:24.020 ***** 2026-02-28 00:44:51.927921 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d18609e-ecdb-578d-a05b-e7913934f080', 'data_vg': 'ceph-4d18609e-ecdb-578d-a05b-e7913934f080'})  2026-02-28 00:44:51.927928 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7', 'data_vg': 'ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7'})  2026-02-28 00:44:51.927935 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:51.927943 | orchestrator | 2026-02-28 00:44:51.927950 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-28 00:44:51.927957 | orchestrator | Saturday 28 February 2026 00:44:51 +0000 (0:00:00.168) 0:00:24.189 ***** 2026-02-28 00:44:51.927970 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d18609e-ecdb-578d-a05b-e7913934f080', 'data_vg': 'ceph-4d18609e-ecdb-578d-a05b-e7913934f080'})  2026-02-28 00:44:57.689574 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7', 'data_vg': 'ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7'})  2026-02-28 00:44:57.689689 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:57.689707 | orchestrator | 2026-02-28 00:44:57.689720 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-02-28 00:44:57.689734 | orchestrator | Saturday 28 February 2026 00:44:52 +0000 (0:00:00.170) 0:00:24.360 ***** 2026-02-28 00:44:57.689745 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d18609e-ecdb-578d-a05b-e7913934f080', 'data_vg': 'ceph-4d18609e-ecdb-578d-a05b-e7913934f080'})  2026-02-28 00:44:57.689757 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7', 'data_vg': 'ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7'})  2026-02-28 00:44:57.689769 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:57.689780 | orchestrator | 2026-02-28 00:44:57.689791 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-28 00:44:57.689802 | orchestrator | Saturday 28 February 2026 00:44:52 +0000 (0:00:00.169) 0:00:24.530 ***** 2026-02-28 00:44:57.689813 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d18609e-ecdb-578d-a05b-e7913934f080', 'data_vg': 'ceph-4d18609e-ecdb-578d-a05b-e7913934f080'})  2026-02-28 00:44:57.689824 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7', 'data_vg': 'ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7'})  2026-02-28 00:44:57.689835 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:57.689846 | orchestrator | 2026-02-28 00:44:57.689857 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-28 00:44:57.689868 | orchestrator | Saturday 28 February 2026 00:44:52 +0000 (0:00:00.171) 0:00:24.701 ***** 2026-02-28 00:44:57.689879 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:44:57.689891 | orchestrator | 2026-02-28 00:44:57.689902 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-28 00:44:57.689913 | orchestrator | Saturday 28 February 2026 00:44:52 +0000 
(0:00:00.535) 0:00:25.237 ***** 2026-02-28 00:44:57.689924 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:44:57.689934 | orchestrator | 2026-02-28 00:44:57.689945 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-28 00:44:57.689973 | orchestrator | Saturday 28 February 2026 00:44:53 +0000 (0:00:00.563) 0:00:25.800 ***** 2026-02-28 00:44:57.689984 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:44:57.690131 | orchestrator | 2026-02-28 00:44:57.690152 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-28 00:44:57.690165 | orchestrator | Saturday 28 February 2026 00:44:53 +0000 (0:00:00.168) 0:00:25.968 ***** 2026-02-28 00:44:57.690206 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-4d18609e-ecdb-578d-a05b-e7913934f080', 'vg_name': 'ceph-4d18609e-ecdb-578d-a05b-e7913934f080'}) 2026-02-28 00:44:57.690222 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7', 'vg_name': 'ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7'}) 2026-02-28 00:44:57.690234 | orchestrator | 2026-02-28 00:44:57.690247 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-28 00:44:57.690259 | orchestrator | Saturday 28 February 2026 00:44:53 +0000 (0:00:00.222) 0:00:26.191 ***** 2026-02-28 00:44:57.690272 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d18609e-ecdb-578d-a05b-e7913934f080', 'data_vg': 'ceph-4d18609e-ecdb-578d-a05b-e7913934f080'})  2026-02-28 00:44:57.690284 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7', 'data_vg': 'ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7'})  2026-02-28 00:44:57.690296 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:57.690308 | orchestrator | 2026-02-28 00:44:57.690321 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-02-28 00:44:57.690333 | orchestrator | Saturday 28 February 2026 00:44:54 +0000 (0:00:00.412) 0:00:26.603 ***** 2026-02-28 00:44:57.690345 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d18609e-ecdb-578d-a05b-e7913934f080', 'data_vg': 'ceph-4d18609e-ecdb-578d-a05b-e7913934f080'})  2026-02-28 00:44:57.690357 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7', 'data_vg': 'ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7'})  2026-02-28 00:44:57.690370 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:57.690382 | orchestrator | 2026-02-28 00:44:57.690394 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-28 00:44:57.690407 | orchestrator | Saturday 28 February 2026 00:44:54 +0000 (0:00:00.175) 0:00:26.779 ***** 2026-02-28 00:44:57.690419 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d18609e-ecdb-578d-a05b-e7913934f080', 'data_vg': 'ceph-4d18609e-ecdb-578d-a05b-e7913934f080'})  2026-02-28 00:44:57.690431 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7', 'data_vg': 'ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7'})  2026-02-28 00:44:57.690442 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:57.690452 | orchestrator | 2026-02-28 00:44:57.690463 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-28 00:44:57.690474 | orchestrator | Saturday 28 February 2026 00:44:54 +0000 (0:00:00.168) 0:00:26.947 ***** 2026-02-28 00:44:57.690505 | orchestrator | ok: [testbed-node-3] => { 2026-02-28 00:44:57.690517 | orchestrator |  "lvm_report": { 2026-02-28 00:44:57.690529 | orchestrator |  "lv": [ 2026-02-28 00:44:57.690540 | orchestrator |  { 2026-02-28 00:44:57.690551 | orchestrator |  "lv_name": 
"osd-block-4d18609e-ecdb-578d-a05b-e7913934f080", 2026-02-28 00:44:57.690563 | orchestrator |  "vg_name": "ceph-4d18609e-ecdb-578d-a05b-e7913934f080" 2026-02-28 00:44:57.690574 | orchestrator |  }, 2026-02-28 00:44:57.690585 | orchestrator |  { 2026-02-28 00:44:57.690596 | orchestrator |  "lv_name": "osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7", 2026-02-28 00:44:57.690607 | orchestrator |  "vg_name": "ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7" 2026-02-28 00:44:57.690617 | orchestrator |  } 2026-02-28 00:44:57.690628 | orchestrator |  ], 2026-02-28 00:44:57.690639 | orchestrator |  "pv": [ 2026-02-28 00:44:57.690650 | orchestrator |  { 2026-02-28 00:44:57.690661 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-28 00:44:57.690672 | orchestrator |  "vg_name": "ceph-4d18609e-ecdb-578d-a05b-e7913934f080" 2026-02-28 00:44:57.690683 | orchestrator |  }, 2026-02-28 00:44:57.690693 | orchestrator |  { 2026-02-28 00:44:57.690712 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-28 00:44:57.690723 | orchestrator |  "vg_name": "ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7" 2026-02-28 00:44:57.690734 | orchestrator |  } 2026-02-28 00:44:57.690745 | orchestrator |  ] 2026-02-28 00:44:57.690756 | orchestrator |  } 2026-02-28 00:44:57.690767 | orchestrator | } 2026-02-28 00:44:57.690778 | orchestrator | 2026-02-28 00:44:57.690789 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-28 00:44:57.690800 | orchestrator | 2026-02-28 00:44:57.690811 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-28 00:44:57.690822 | orchestrator | Saturday 28 February 2026 00:44:54 +0000 (0:00:00.297) 0:00:27.245 ***** 2026-02-28 00:44:57.690832 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-28 00:44:57.690844 | orchestrator | 2026-02-28 00:44:57.690854 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-28 
00:44:57.690865 | orchestrator | Saturday 28 February 2026 00:44:55 +0000 (0:00:00.249) 0:00:27.495 ***** 2026-02-28 00:44:57.690876 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:44:57.690887 | orchestrator | 2026-02-28 00:44:57.690898 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:57.690909 | orchestrator | Saturday 28 February 2026 00:44:55 +0000 (0:00:00.222) 0:00:27.717 ***** 2026-02-28 00:44:57.690920 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-28 00:44:57.690931 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-28 00:44:57.690942 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-28 00:44:57.690953 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-28 00:44:57.690964 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-28 00:44:57.690975 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-28 00:44:57.690986 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-28 00:44:57.691022 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-28 00:44:57.691041 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-28 00:44:57.691053 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-28 00:44:57.691064 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-28 00:44:57.691075 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-28 00:44:57.691085 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-28 00:44:57.691096 | orchestrator | 2026-02-28 00:44:57.691107 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:57.691118 | orchestrator | Saturday 28 February 2026 00:44:55 +0000 (0:00:00.553) 0:00:28.271 ***** 2026-02-28 00:44:57.691129 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:57.691139 | orchestrator | 2026-02-28 00:44:57.691150 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:57.691170 | orchestrator | Saturday 28 February 2026 00:44:56 +0000 (0:00:00.211) 0:00:28.483 ***** 2026-02-28 00:44:57.691182 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:57.691192 | orchestrator | 2026-02-28 00:44:57.691203 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:57.691214 | orchestrator | Saturday 28 February 2026 00:44:56 +0000 (0:00:00.199) 0:00:28.682 ***** 2026-02-28 00:44:57.691225 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:57.691236 | orchestrator | 2026-02-28 00:44:57.691247 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:57.691265 | orchestrator | Saturday 28 February 2026 00:44:57 +0000 (0:00:00.677) 0:00:29.360 ***** 2026-02-28 00:44:57.691276 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:57.691287 | orchestrator | 2026-02-28 00:44:57.691298 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:57.691309 | orchestrator | Saturday 28 February 2026 00:44:57 +0000 (0:00:00.220) 0:00:29.581 ***** 2026-02-28 00:44:57.691320 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:57.691330 | orchestrator | 2026-02-28 00:44:57.691341 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-02-28 00:44:57.691352 | orchestrator | Saturday 28 February 2026 00:44:57 +0000 (0:00:00.223) 0:00:29.805 ***** 2026-02-28 00:44:57.691363 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:57.691374 | orchestrator | 2026-02-28 00:44:57.691392 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:45:09.430441 | orchestrator | Saturday 28 February 2026 00:44:57 +0000 (0:00:00.211) 0:00:30.016 ***** 2026-02-28 00:45:09.430534 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:09.430547 | orchestrator | 2026-02-28 00:45:09.430556 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:45:09.430565 | orchestrator | Saturday 28 February 2026 00:44:57 +0000 (0:00:00.192) 0:00:30.209 ***** 2026-02-28 00:45:09.430573 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:09.430581 | orchestrator | 2026-02-28 00:45:09.430590 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:45:09.430598 | orchestrator | Saturday 28 February 2026 00:44:58 +0000 (0:00:00.206) 0:00:30.416 ***** 2026-02-28 00:45:09.430606 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97) 2026-02-28 00:45:09.430615 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97) 2026-02-28 00:45:09.430623 | orchestrator | 2026-02-28 00:45:09.430631 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:45:09.430639 | orchestrator | Saturday 28 February 2026 00:44:58 +0000 (0:00:00.505) 0:00:30.922 ***** 2026-02-28 00:45:09.430647 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2388cee9-22a9-4416-93b3-e236454bc031) 2026-02-28 00:45:09.430655 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2388cee9-22a9-4416-93b3-e236454bc031) 2026-02-28 00:45:09.430663 | orchestrator | 2026-02-28 00:45:09.430671 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:45:09.430679 | orchestrator | Saturday 28 February 2026 00:44:59 +0000 (0:00:00.450) 0:00:31.372 ***** 2026-02-28 00:45:09.430687 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fa3b351c-e54b-439c-bac1-d7e08e27df4b) 2026-02-28 00:45:09.430712 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fa3b351c-e54b-439c-bac1-d7e08e27df4b) 2026-02-28 00:45:09.430720 | orchestrator | 2026-02-28 00:45:09.430728 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:45:09.430736 | orchestrator | Saturday 28 February 2026 00:44:59 +0000 (0:00:00.482) 0:00:31.855 ***** 2026-02-28 00:45:09.430758 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_151bff65-91b8-4b11-a525-96a3d98709b9) 2026-02-28 00:45:09.430767 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_151bff65-91b8-4b11-a525-96a3d98709b9) 2026-02-28 00:45:09.430775 | orchestrator | 2026-02-28 00:45:09.430783 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:45:09.430791 | orchestrator | Saturday 28 February 2026 00:45:00 +0000 (0:00:00.673) 0:00:32.528 ***** 2026-02-28 00:45:09.430799 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-28 00:45:09.430807 | orchestrator | 2026-02-28 00:45:09.430815 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:45:09.430823 | orchestrator | Saturday 28 February 2026 00:45:00 +0000 (0:00:00.589) 0:00:33.118 ***** 2026-02-28 00:45:09.430854 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-02-28 00:45:09.430863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-28 00:45:09.430871 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-28 00:45:09.430879 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-28 00:45:09.430887 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-28 00:45:09.430895 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-28 00:45:09.430903 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-28 00:45:09.430911 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-28 00:45:09.430919 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-28 00:45:09.430927 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-28 00:45:09.430935 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-28 00:45:09.430943 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-28 00:45:09.430951 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-28 00:45:09.430961 | orchestrator | 2026-02-28 00:45:09.430971 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:45:09.430980 | orchestrator | Saturday 28 February 2026 00:45:01 +0000 (0:00:00.663) 0:00:33.781 ***** 2026-02-28 00:45:09.430989 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:09.431016 | orchestrator | 2026-02-28 
00:45:09.431025 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:45:09.431034 | orchestrator | Saturday 28 February 2026 00:45:01 +0000 (0:00:00.220) 0:00:34.001 ***** 2026-02-28 00:45:09.431043 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:09.431053 | orchestrator | 2026-02-28 00:45:09.431062 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:45:09.431071 | orchestrator | Saturday 28 February 2026 00:45:01 +0000 (0:00:00.212) 0:00:34.214 ***** 2026-02-28 00:45:09.431080 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:09.431089 | orchestrator | 2026-02-28 00:45:09.431114 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:45:09.431124 | orchestrator | Saturday 28 February 2026 00:45:02 +0000 (0:00:00.199) 0:00:34.414 ***** 2026-02-28 00:45:09.431133 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:09.431142 | orchestrator | 2026-02-28 00:45:09.431151 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:45:09.431160 | orchestrator | Saturday 28 February 2026 00:45:02 +0000 (0:00:00.220) 0:00:34.634 ***** 2026-02-28 00:45:09.431170 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:09.431179 | orchestrator | 2026-02-28 00:45:09.431200 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:45:09.431220 | orchestrator | Saturday 28 February 2026 00:45:02 +0000 (0:00:00.223) 0:00:34.858 ***** 2026-02-28 00:45:09.431230 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:09.431239 | orchestrator | 2026-02-28 00:45:09.431248 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:45:09.431258 | orchestrator | Saturday 28 February 2026 00:45:02 +0000 (0:00:00.201) 
0:00:35.059 ***** 2026-02-28 00:45:09.431267 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:09.431276 | orchestrator | 2026-02-28 00:45:09.431285 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:45:09.431294 | orchestrator | Saturday 28 February 2026 00:45:02 +0000 (0:00:00.205) 0:00:35.265 ***** 2026-02-28 00:45:09.431309 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:09.431318 | orchestrator | 2026-02-28 00:45:09.431326 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:45:09.431334 | orchestrator | Saturday 28 February 2026 00:45:03 +0000 (0:00:00.203) 0:00:35.468 ***** 2026-02-28 00:45:09.431342 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-28 00:45:09.431350 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-28 00:45:09.431369 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-28 00:45:09.431377 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-28 00:45:09.431385 | orchestrator | 2026-02-28 00:45:09.431393 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:45:09.431401 | orchestrator | Saturday 28 February 2026 00:45:04 +0000 (0:00:00.928) 0:00:36.397 ***** 2026-02-28 00:45:09.431409 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:09.431417 | orchestrator | 2026-02-28 00:45:09.431425 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:45:09.431433 | orchestrator | Saturday 28 February 2026 00:45:04 +0000 (0:00:00.217) 0:00:36.615 ***** 2026-02-28 00:45:09.431455 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:09.431464 | orchestrator | 2026-02-28 00:45:09.431472 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:45:09.431480 | orchestrator | Saturday 28 
February 2026 00:45:05 +0000 (0:00:00.839) 0:00:37.454 ***** 2026-02-28 00:45:09.431488 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:09.431496 | orchestrator | 2026-02-28 00:45:09.431504 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:45:09.431512 | orchestrator | Saturday 28 February 2026 00:45:05 +0000 (0:00:00.293) 0:00:37.748 ***** 2026-02-28 00:45:09.431520 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:09.431538 | orchestrator | 2026-02-28 00:45:09.431546 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-28 00:45:09.431554 | orchestrator | Saturday 28 February 2026 00:45:05 +0000 (0:00:00.237) 0:00:37.985 ***** 2026-02-28 00:45:09.431562 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:09.431570 | orchestrator | 2026-02-28 00:45:09.431578 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-28 00:45:09.431585 | orchestrator | Saturday 28 February 2026 00:45:05 +0000 (0:00:00.145) 0:00:38.130 ***** 2026-02-28 00:45:09.431594 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '73c4f4bf-6139-5634-9e57-de597eca9964'}}) 2026-02-28 00:45:09.431602 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '17f6d453-f54a-57d2-bd55-b12b469b0db8'}}) 2026-02-28 00:45:09.431610 | orchestrator | 2026-02-28 00:45:09.431618 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-28 00:45:09.431625 | orchestrator | Saturday 28 February 2026 00:45:05 +0000 (0:00:00.186) 0:00:38.317 ***** 2026-02-28 00:45:09.431635 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-73c4f4bf-6139-5634-9e57-de597eca9964', 'data_vg': 'ceph-73c4f4bf-6139-5634-9e57-de597eca9964'}) 2026-02-28 00:45:09.431655 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8', 'data_vg': 'ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8'}) 2026-02-28 00:45:09.431663 | orchestrator | 2026-02-28 00:45:09.431671 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-28 00:45:09.431689 | orchestrator | Saturday 28 February 2026 00:45:07 +0000 (0:00:01.880) 0:00:40.197 ***** 2026-02-28 00:45:09.431697 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73c4f4bf-6139-5634-9e57-de597eca9964', 'data_vg': 'ceph-73c4f4bf-6139-5634-9e57-de597eca9964'})  2026-02-28 00:45:09.431707 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8', 'data_vg': 'ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8'})  2026-02-28 00:45:09.431720 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:09.431728 | orchestrator | 2026-02-28 00:45:09.431736 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-28 00:45:09.431744 | orchestrator | Saturday 28 February 2026 00:45:08 +0000 (0:00:00.217) 0:00:40.414 ***** 2026-02-28 00:45:09.431753 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-73c4f4bf-6139-5634-9e57-de597eca9964', 'data_vg': 'ceph-73c4f4bf-6139-5634-9e57-de597eca9964'}) 2026-02-28 00:45:09.431766 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8', 'data_vg': 'ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8'}) 2026-02-28 00:45:15.481868 | orchestrator | 2026-02-28 00:45:15.481984 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-28 00:45:15.482121 | orchestrator | Saturday 28 February 2026 00:45:09 +0000 (0:00:01.418) 0:00:41.832 ***** 2026-02-28 00:45:15.482137 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73c4f4bf-6139-5634-9e57-de597eca9964', 'data_vg': 
'ceph-73c4f4bf-6139-5634-9e57-de597eca9964'})  2026-02-28 00:45:15.482155 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8', 'data_vg': 'ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8'})  2026-02-28 00:45:15.482170 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:15.482186 | orchestrator | 2026-02-28 00:45:15.482201 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-28 00:45:15.482216 | orchestrator | Saturday 28 February 2026 00:45:09 +0000 (0:00:00.165) 0:00:41.997 ***** 2026-02-28 00:45:15.482229 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:15.482244 | orchestrator | 2026-02-28 00:45:15.482258 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-28 00:45:15.482272 | orchestrator | Saturday 28 February 2026 00:45:09 +0000 (0:00:00.152) 0:00:42.150 ***** 2026-02-28 00:45:15.482287 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73c4f4bf-6139-5634-9e57-de597eca9964', 'data_vg': 'ceph-73c4f4bf-6139-5634-9e57-de597eca9964'})  2026-02-28 00:45:15.482301 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8', 'data_vg': 'ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8'})  2026-02-28 00:45:15.482316 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:15.482329 | orchestrator | 2026-02-28 00:45:15.482343 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-28 00:45:15.482357 | orchestrator | Saturday 28 February 2026 00:45:09 +0000 (0:00:00.147) 0:00:42.297 ***** 2026-02-28 00:45:15.482371 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:15.482385 | orchestrator | 2026-02-28 00:45:15.482400 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-28 00:45:15.482414 | orchestrator | 
Saturday 28 February 2026 00:45:10 +0000 (0:00:00.131) 0:00:42.429 ***** 2026-02-28 00:45:15.482428 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73c4f4bf-6139-5634-9e57-de597eca9964', 'data_vg': 'ceph-73c4f4bf-6139-5634-9e57-de597eca9964'})  2026-02-28 00:45:15.482442 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8', 'data_vg': 'ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8'})  2026-02-28 00:45:15.482456 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:15.482470 | orchestrator | 2026-02-28 00:45:15.482484 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-28 00:45:15.482498 | orchestrator | Saturday 28 February 2026 00:45:10 +0000 (0:00:00.508) 0:00:42.937 ***** 2026-02-28 00:45:15.482512 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:15.482526 | orchestrator | 2026-02-28 00:45:15.482540 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-28 00:45:15.482554 | orchestrator | Saturday 28 February 2026 00:45:10 +0000 (0:00:00.160) 0:00:43.098 ***** 2026-02-28 00:45:15.482568 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73c4f4bf-6139-5634-9e57-de597eca9964', 'data_vg': 'ceph-73c4f4bf-6139-5634-9e57-de597eca9964'})  2026-02-28 00:45:15.482608 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8', 'data_vg': 'ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8'})  2026-02-28 00:45:15.482623 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:15.482638 | orchestrator | 2026-02-28 00:45:15.482652 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-28 00:45:15.482684 | orchestrator | Saturday 28 February 2026 00:45:10 +0000 (0:00:00.161) 0:00:43.260 ***** 2026-02-28 00:45:15.482700 | orchestrator | ok: [testbed-node-4] 
2026-02-28 00:45:15.482715 | orchestrator | 2026-02-28 00:45:15.482729 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-28 00:45:15.482743 | orchestrator | Saturday 28 February 2026 00:45:11 +0000 (0:00:00.148) 0:00:43.408 ***** 2026-02-28 00:45:15.482757 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73c4f4bf-6139-5634-9e57-de597eca9964', 'data_vg': 'ceph-73c4f4bf-6139-5634-9e57-de597eca9964'})  2026-02-28 00:45:15.482771 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8', 'data_vg': 'ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8'})  2026-02-28 00:45:15.482785 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:15.482800 | orchestrator | 2026-02-28 00:45:15.482810 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-28 00:45:15.482817 | orchestrator | Saturday 28 February 2026 00:45:11 +0000 (0:00:00.177) 0:00:43.585 ***** 2026-02-28 00:45:15.482825 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73c4f4bf-6139-5634-9e57-de597eca9964', 'data_vg': 'ceph-73c4f4bf-6139-5634-9e57-de597eca9964'})  2026-02-28 00:45:15.482833 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8', 'data_vg': 'ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8'})  2026-02-28 00:45:15.482841 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:15.482849 | orchestrator | 2026-02-28 00:45:15.482857 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-28 00:45:15.482883 | orchestrator | Saturday 28 February 2026 00:45:11 +0000 (0:00:00.170) 0:00:43.756 ***** 2026-02-28 00:45:15.482892 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73c4f4bf-6139-5634-9e57-de597eca9964', 'data_vg': 'ceph-73c4f4bf-6139-5634-9e57-de597eca9964'})  2026-02-28 
00:45:15.482900 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8', 'data_vg': 'ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8'})  2026-02-28 00:45:15.482908 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:15.482916 | orchestrator | 2026-02-28 00:45:15.482924 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-28 00:45:15.482932 | orchestrator | Saturday 28 February 2026 00:45:11 +0000 (0:00:00.176) 0:00:43.932 ***** 2026-02-28 00:45:15.482940 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:15.482948 | orchestrator | 2026-02-28 00:45:15.482956 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-28 00:45:15.482964 | orchestrator | Saturday 28 February 2026 00:45:11 +0000 (0:00:00.160) 0:00:44.093 ***** 2026-02-28 00:45:15.482972 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:15.482979 | orchestrator | 2026-02-28 00:45:15.482987 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-28 00:45:15.483047 | orchestrator | Saturday 28 February 2026 00:45:11 +0000 (0:00:00.207) 0:00:44.300 ***** 2026-02-28 00:45:15.483056 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:15.483064 | orchestrator | 2026-02-28 00:45:15.483072 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-28 00:45:15.483080 | orchestrator | Saturday 28 February 2026 00:45:12 +0000 (0:00:00.157) 0:00:44.457 ***** 2026-02-28 00:45:15.483088 | orchestrator | ok: [testbed-node-4] => { 2026-02-28 00:45:15.483096 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-28 00:45:15.483116 | orchestrator | } 2026-02-28 00:45:15.483125 | orchestrator | 2026-02-28 00:45:15.483133 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-28 
00:45:15.483141 | orchestrator | Saturday 28 February 2026 00:45:12 +0000 (0:00:00.163) 0:00:44.621 ***** 2026-02-28 00:45:15.483149 | orchestrator | ok: [testbed-node-4] => { 2026-02-28 00:45:15.483157 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-28 00:45:15.483165 | orchestrator | } 2026-02-28 00:45:15.483173 | orchestrator | 2026-02-28 00:45:15.483186 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-28 00:45:15.483194 | orchestrator | Saturday 28 February 2026 00:45:12 +0000 (0:00:00.157) 0:00:44.778 ***** 2026-02-28 00:45:15.483202 | orchestrator | ok: [testbed-node-4] => { 2026-02-28 00:45:15.483210 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-28 00:45:15.483218 | orchestrator | } 2026-02-28 00:45:15.483226 | orchestrator | 2026-02-28 00:45:15.483234 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-28 00:45:15.483242 | orchestrator | Saturday 28 February 2026 00:45:12 +0000 (0:00:00.381) 0:00:45.160 ***** 2026-02-28 00:45:15.483250 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:45:15.483258 | orchestrator | 2026-02-28 00:45:15.483266 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-28 00:45:15.483274 | orchestrator | Saturday 28 February 2026 00:45:13 +0000 (0:00:00.519) 0:00:45.679 ***** 2026-02-28 00:45:15.483282 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:45:15.483290 | orchestrator | 2026-02-28 00:45:15.483298 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-28 00:45:15.483306 | orchestrator | Saturday 28 February 2026 00:45:13 +0000 (0:00:00.475) 0:00:46.154 ***** 2026-02-28 00:45:15.483314 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:45:15.483322 | orchestrator | 2026-02-28 00:45:15.483330 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-02-28 00:45:15.483338 | orchestrator | Saturday 28 February 2026 00:45:14 +0000 (0:00:00.528) 0:00:46.683 ***** 2026-02-28 00:45:15.483346 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:45:15.483354 | orchestrator | 2026-02-28 00:45:15.483362 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-28 00:45:15.483370 | orchestrator | Saturday 28 February 2026 00:45:14 +0000 (0:00:00.161) 0:00:46.844 ***** 2026-02-28 00:45:15.483378 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:15.483386 | orchestrator | 2026-02-28 00:45:15.483394 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-28 00:45:15.483402 | orchestrator | Saturday 28 February 2026 00:45:14 +0000 (0:00:00.112) 0:00:46.957 ***** 2026-02-28 00:45:15.483410 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:15.483418 | orchestrator | 2026-02-28 00:45:15.483426 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-28 00:45:15.483434 | orchestrator | Saturday 28 February 2026 00:45:14 +0000 (0:00:00.122) 0:00:47.079 ***** 2026-02-28 00:45:15.483442 | orchestrator | ok: [testbed-node-4] => { 2026-02-28 00:45:15.483450 | orchestrator |  "vgs_report": { 2026-02-28 00:45:15.483459 | orchestrator |  "vg": [] 2026-02-28 00:45:15.483467 | orchestrator |  } 2026-02-28 00:45:15.483475 | orchestrator | } 2026-02-28 00:45:15.483483 | orchestrator | 2026-02-28 00:45:15.483491 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-28 00:45:15.483499 | orchestrator | Saturday 28 February 2026 00:45:14 +0000 (0:00:00.166) 0:00:47.246 ***** 2026-02-28 00:45:15.483507 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:15.483516 | orchestrator | 2026-02-28 00:45:15.483524 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-02-28 00:45:15.483532 | orchestrator | Saturday 28 February 2026 00:45:15 +0000 (0:00:00.134) 0:00:47.380 ***** 2026-02-28 00:45:15.483540 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:15.483547 | orchestrator | 2026-02-28 00:45:15.483555 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-28 00:45:15.483570 | orchestrator | Saturday 28 February 2026 00:45:15 +0000 (0:00:00.146) 0:00:47.526 ***** 2026-02-28 00:45:15.483578 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:15.483586 | orchestrator | 2026-02-28 00:45:15.483597 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-28 00:45:15.483612 | orchestrator | Saturday 28 February 2026 00:45:15 +0000 (0:00:00.136) 0:00:47.663 ***** 2026-02-28 00:45:15.483626 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:15.483639 | orchestrator | 2026-02-28 00:45:15.483660 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-28 00:45:20.326635 | orchestrator | Saturday 28 February 2026 00:45:15 +0000 (0:00:00.142) 0:00:47.805 ***** 2026-02-28 00:45:20.326759 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:20.326777 | orchestrator | 2026-02-28 00:45:20.326791 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-28 00:45:20.326803 | orchestrator | Saturday 28 February 2026 00:45:15 +0000 (0:00:00.406) 0:00:48.212 ***** 2026-02-28 00:45:20.326815 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:20.326825 | orchestrator | 2026-02-28 00:45:20.326838 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-28 00:45:20.326847 | orchestrator | Saturday 28 February 2026 00:45:16 +0000 (0:00:00.161) 0:00:48.373 ***** 2026-02-28 00:45:20.326854 | orchestrator | skipping: [testbed-node-4] 
2026-02-28 00:45:20.326861 | orchestrator | 2026-02-28 00:45:20.326868 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-28 00:45:20.326875 | orchestrator | Saturday 28 February 2026 00:45:16 +0000 (0:00:00.146) 0:00:48.520 ***** 2026-02-28 00:45:20.326882 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:20.326889 | orchestrator | 2026-02-28 00:45:20.326896 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-28 00:45:20.326902 | orchestrator | Saturday 28 February 2026 00:45:16 +0000 (0:00:00.155) 0:00:48.675 ***** 2026-02-28 00:45:20.326909 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:20.326916 | orchestrator | 2026-02-28 00:45:20.326923 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-28 00:45:20.326930 | orchestrator | Saturday 28 February 2026 00:45:16 +0000 (0:00:00.142) 0:00:48.818 ***** 2026-02-28 00:45:20.326936 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:20.326943 | orchestrator | 2026-02-28 00:45:20.326950 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-28 00:45:20.326956 | orchestrator | Saturday 28 February 2026 00:45:16 +0000 (0:00:00.150) 0:00:48.968 ***** 2026-02-28 00:45:20.326963 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:20.326970 | orchestrator | 2026-02-28 00:45:20.326976 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-28 00:45:20.326983 | orchestrator | Saturday 28 February 2026 00:45:16 +0000 (0:00:00.132) 0:00:49.100 ***** 2026-02-28 00:45:20.327045 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:20.327054 | orchestrator | 2026-02-28 00:45:20.327061 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-28 00:45:20.327068 | orchestrator | 
Saturday 28 February 2026 00:45:16 +0000 (0:00:00.137) 0:00:49.238 ***** 2026-02-28 00:45:20.327074 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:20.327082 | orchestrator | 2026-02-28 00:45:20.327089 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-28 00:45:20.327095 | orchestrator | Saturday 28 February 2026 00:45:17 +0000 (0:00:00.135) 0:00:49.373 ***** 2026-02-28 00:45:20.327102 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:20.327109 | orchestrator | 2026-02-28 00:45:20.327116 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-28 00:45:20.327122 | orchestrator | Saturday 28 February 2026 00:45:17 +0000 (0:00:00.146) 0:00:49.520 ***** 2026-02-28 00:45:20.327130 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73c4f4bf-6139-5634-9e57-de597eca9964', 'data_vg': 'ceph-73c4f4bf-6139-5634-9e57-de597eca9964'})  2026-02-28 00:45:20.327155 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8', 'data_vg': 'ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8'})  2026-02-28 00:45:20.327164 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:20.327171 | orchestrator | 2026-02-28 00:45:20.327179 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-28 00:45:20.327187 | orchestrator | Saturday 28 February 2026 00:45:17 +0000 (0:00:00.151) 0:00:49.671 ***** 2026-02-28 00:45:20.327195 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73c4f4bf-6139-5634-9e57-de597eca9964', 'data_vg': 'ceph-73c4f4bf-6139-5634-9e57-de597eca9964'})  2026-02-28 00:45:20.327202 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8', 'data_vg': 'ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8'})  2026-02-28 00:45:20.327210 | orchestrator | skipping: 
[testbed-node-4] 2026-02-28 00:45:20.327217 | orchestrator | 2026-02-28 00:45:20.327225 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-28 00:45:20.327233 | orchestrator | Saturday 28 February 2026 00:45:17 +0000 (0:00:00.160) 0:00:49.832 ***** 2026-02-28 00:45:20.327240 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73c4f4bf-6139-5634-9e57-de597eca9964', 'data_vg': 'ceph-73c4f4bf-6139-5634-9e57-de597eca9964'})  2026-02-28 00:45:20.327248 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8', 'data_vg': 'ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8'})  2026-02-28 00:45:20.327256 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:20.327263 | orchestrator | 2026-02-28 00:45:20.327271 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-28 00:45:20.327278 | orchestrator | Saturday 28 February 2026 00:45:17 +0000 (0:00:00.159) 0:00:49.991 ***** 2026-02-28 00:45:20.327288 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73c4f4bf-6139-5634-9e57-de597eca9964', 'data_vg': 'ceph-73c4f4bf-6139-5634-9e57-de597eca9964'})  2026-02-28 00:45:20.327300 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8', 'data_vg': 'ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8'})  2026-02-28 00:45:20.327312 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:20.327324 | orchestrator | 2026-02-28 00:45:20.327357 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-28 00:45:20.327370 | orchestrator | Saturday 28 February 2026 00:45:18 +0000 (0:00:00.396) 0:00:50.387 ***** 2026-02-28 00:45:20.327382 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73c4f4bf-6139-5634-9e57-de597eca9964', 'data_vg': 
'ceph-73c4f4bf-6139-5634-9e57-de597eca9964'})  2026-02-28 00:45:20.327396 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8', 'data_vg': 'ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8'})  2026-02-28 00:45:20.327409 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:20.327421 | orchestrator | 2026-02-28 00:45:20.327429 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-28 00:45:20.327437 | orchestrator | Saturday 28 February 2026 00:45:18 +0000 (0:00:00.190) 0:00:50.577 ***** 2026-02-28 00:45:20.327445 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73c4f4bf-6139-5634-9e57-de597eca9964', 'data_vg': 'ceph-73c4f4bf-6139-5634-9e57-de597eca9964'})  2026-02-28 00:45:20.327453 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8', 'data_vg': 'ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8'})  2026-02-28 00:45:20.327460 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:20.327468 | orchestrator | 2026-02-28 00:45:20.327476 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-02-28 00:45:20.327484 | orchestrator | Saturday 28 February 2026 00:45:18 +0000 (0:00:00.151) 0:00:50.729 ***** 2026-02-28 00:45:20.327491 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73c4f4bf-6139-5634-9e57-de597eca9964', 'data_vg': 'ceph-73c4f4bf-6139-5634-9e57-de597eca9964'})  2026-02-28 00:45:20.327507 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8', 'data_vg': 'ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8'})  2026-02-28 00:45:20.327515 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:20.327523 | orchestrator | 2026-02-28 00:45:20.327531 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-28 
00:45:20.327539 | orchestrator | Saturday 28 February 2026 00:45:18 +0000 (0:00:00.152) 0:00:50.881 ***** 2026-02-28 00:45:20.327546 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73c4f4bf-6139-5634-9e57-de597eca9964', 'data_vg': 'ceph-73c4f4bf-6139-5634-9e57-de597eca9964'})  2026-02-28 00:45:20.327553 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8', 'data_vg': 'ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8'})  2026-02-28 00:45:20.327560 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:20.327566 | orchestrator | 2026-02-28 00:45:20.327573 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-28 00:45:20.327580 | orchestrator | Saturday 28 February 2026 00:45:18 +0000 (0:00:00.159) 0:00:51.041 ***** 2026-02-28 00:45:20.327587 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:45:20.327594 | orchestrator | 2026-02-28 00:45:20.327601 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-28 00:45:20.327607 | orchestrator | Saturday 28 February 2026 00:45:19 +0000 (0:00:00.518) 0:00:51.559 ***** 2026-02-28 00:45:20.327614 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:45:20.327621 | orchestrator | 2026-02-28 00:45:20.327628 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-28 00:45:20.327635 | orchestrator | Saturday 28 February 2026 00:45:19 +0000 (0:00:00.520) 0:00:52.080 ***** 2026-02-28 00:45:20.327641 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:45:20.327648 | orchestrator | 2026-02-28 00:45:20.327655 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-28 00:45:20.327662 | orchestrator | Saturday 28 February 2026 00:45:19 +0000 (0:00:00.157) 0:00:52.237 ***** 2026-02-28 00:45:20.327668 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8', 'vg_name': 'ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8'}) 2026-02-28 00:45:20.327677 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-73c4f4bf-6139-5634-9e57-de597eca9964', 'vg_name': 'ceph-73c4f4bf-6139-5634-9e57-de597eca9964'}) 2026-02-28 00:45:20.327684 | orchestrator | 2026-02-28 00:45:20.327691 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-28 00:45:20.327697 | orchestrator | Saturday 28 February 2026 00:45:20 +0000 (0:00:00.171) 0:00:52.409 ***** 2026-02-28 00:45:20.327704 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73c4f4bf-6139-5634-9e57-de597eca9964', 'data_vg': 'ceph-73c4f4bf-6139-5634-9e57-de597eca9964'})  2026-02-28 00:45:20.327711 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8', 'data_vg': 'ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8'})  2026-02-28 00:45:20.327718 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:20.327724 | orchestrator | 2026-02-28 00:45:20.327731 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-28 00:45:20.327738 | orchestrator | Saturday 28 February 2026 00:45:20 +0000 (0:00:00.169) 0:00:52.579 ***** 2026-02-28 00:45:20.327745 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73c4f4bf-6139-5634-9e57-de597eca9964', 'data_vg': 'ceph-73c4f4bf-6139-5634-9e57-de597eca9964'})  2026-02-28 00:45:20.327757 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8', 'data_vg': 'ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8'})  2026-02-28 00:45:26.575844 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:26.575965 | orchestrator | 2026-02-28 00:45:26.575978 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-28 00:45:26.576014 | 
orchestrator | Saturday 28 February 2026 00:45:20 +0000 (0:00:00.158) 0:00:52.737 ***** 2026-02-28 00:45:26.576030 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73c4f4bf-6139-5634-9e57-de597eca9964', 'data_vg': 'ceph-73c4f4bf-6139-5634-9e57-de597eca9964'})  2026-02-28 00:45:26.576040 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8', 'data_vg': 'ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8'})  2026-02-28 00:45:26.576048 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:26.576056 | orchestrator | 2026-02-28 00:45:26.576064 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-28 00:45:26.576072 | orchestrator | Saturday 28 February 2026 00:45:20 +0000 (0:00:00.171) 0:00:52.909 ***** 2026-02-28 00:45:26.576080 | orchestrator | ok: [testbed-node-4] => { 2026-02-28 00:45:26.576088 | orchestrator |  "lvm_report": { 2026-02-28 00:45:26.576098 | orchestrator |  "lv": [ 2026-02-28 00:45:26.576106 | orchestrator |  { 2026-02-28 00:45:26.576115 | orchestrator |  "lv_name": "osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8", 2026-02-28 00:45:26.576124 | orchestrator |  "vg_name": "ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8" 2026-02-28 00:45:26.576132 | orchestrator |  }, 2026-02-28 00:45:26.576140 | orchestrator |  { 2026-02-28 00:45:26.576148 | orchestrator |  "lv_name": "osd-block-73c4f4bf-6139-5634-9e57-de597eca9964", 2026-02-28 00:45:26.576156 | orchestrator |  "vg_name": "ceph-73c4f4bf-6139-5634-9e57-de597eca9964" 2026-02-28 00:45:26.576164 | orchestrator |  } 2026-02-28 00:45:26.576172 | orchestrator |  ], 2026-02-28 00:45:26.576180 | orchestrator |  "pv": [ 2026-02-28 00:45:26.576188 | orchestrator |  { 2026-02-28 00:45:26.576196 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-28 00:45:26.576209 | orchestrator |  "vg_name": "ceph-73c4f4bf-6139-5634-9e57-de597eca9964" 2026-02-28 00:45:26.576217 | orchestrator |  }, 2026-02-28 
00:45:26.576225 | orchestrator |  { 2026-02-28 00:45:26.576233 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-28 00:45:26.576241 | orchestrator |  "vg_name": "ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8" 2026-02-28 00:45:26.576249 | orchestrator |  } 2026-02-28 00:45:26.576257 | orchestrator |  ] 2026-02-28 00:45:26.576265 | orchestrator |  } 2026-02-28 00:45:26.576273 | orchestrator | } 2026-02-28 00:45:26.576281 | orchestrator | 2026-02-28 00:45:26.576289 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-28 00:45:26.576298 | orchestrator | 2026-02-28 00:45:26.576306 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-28 00:45:26.576314 | orchestrator | Saturday 28 February 2026 00:45:21 +0000 (0:00:00.507) 0:00:53.416 ***** 2026-02-28 00:45:26.576322 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-28 00:45:26.576330 | orchestrator | 2026-02-28 00:45:26.576339 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-28 00:45:26.576348 | orchestrator | Saturday 28 February 2026 00:45:21 +0000 (0:00:00.263) 0:00:53.679 ***** 2026-02-28 00:45:26.576358 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:45:26.576366 | orchestrator | 2026-02-28 00:45:26.576376 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:45:26.576385 | orchestrator | Saturday 28 February 2026 00:45:21 +0000 (0:00:00.235) 0:00:53.915 ***** 2026-02-28 00:45:26.576394 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-28 00:45:26.576403 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-28 00:45:26.576412 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-28 00:45:26.576421 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-02-28 00:45:26.576437 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-02-28 00:45:26.576446 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-02-28 00:45:26.576455 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-02-28 00:45:26.576464 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-02-28 00:45:26.576473 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-02-28 00:45:26.576485 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-02-28 00:45:26.576494 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-02-28 00:45:26.576503 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-02-28 00:45:26.576512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-02-28 00:45:26.576521 | orchestrator |
2026-02-28 00:45:26.576531 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:45:26.576540 | orchestrator | Saturday 28 February 2026 00:45:22 +0000 (0:00:00.419) 0:00:54.335 *****
2026-02-28 00:45:26.576549 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:26.576558 | orchestrator |
2026-02-28 00:45:26.576567 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:45:26.576576 | orchestrator | Saturday 28 February 2026 00:45:22 +0000 (0:00:00.212) 0:00:54.548 *****
2026-02-28 00:45:26.576584 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:26.576593 | orchestrator |
2026-02-28 00:45:26.576602 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:45:26.576627 | orchestrator | Saturday 28 February 2026 00:45:22 +0000 (0:00:00.202) 0:00:54.751 *****
2026-02-28 00:45:26.576636 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:26.576645 | orchestrator |
2026-02-28 00:45:26.576654 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:45:26.576663 | orchestrator | Saturday 28 February 2026 00:45:22 +0000 (0:00:00.207) 0:00:54.958 *****
2026-02-28 00:45:26.576672 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:26.576681 | orchestrator |
2026-02-28 00:45:26.576690 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:45:26.576699 | orchestrator | Saturday 28 February 2026 00:45:22 +0000 (0:00:00.263) 0:00:55.222 *****
2026-02-28 00:45:26.576707 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:26.576715 | orchestrator |
2026-02-28 00:45:26.576723 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:45:26.576731 | orchestrator | Saturday 28 February 2026 00:45:23 +0000 (0:00:00.204) 0:00:55.427 *****
2026-02-28 00:45:26.576739 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:26.576747 | orchestrator |
2026-02-28 00:45:26.576755 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:45:26.576763 | orchestrator | Saturday 28 February 2026 00:45:23 +0000 (0:00:00.648) 0:00:56.076 *****
2026-02-28 00:45:26.576771 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:26.576782 | orchestrator |
2026-02-28 00:45:26.576795 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:45:26.576809 | orchestrator | Saturday 28 February 2026 00:45:23 +0000 (0:00:00.200) 0:00:56.277 *****
2026-02-28 00:45:26.576823 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:26.576837 | orchestrator |
2026-02-28 00:45:26.576851 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:45:26.576865 | orchestrator | Saturday 28 February 2026 00:45:24 +0000 (0:00:00.209) 0:00:56.486 *****
2026-02-28 00:45:26.576874 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878)
2026-02-28 00:45:26.576893 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878)
2026-02-28 00:45:26.576905 | orchestrator |
2026-02-28 00:45:26.576914 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:45:26.576922 | orchestrator | Saturday 28 February 2026 00:45:24 +0000 (0:00:00.434) 0:00:56.920 *****
2026-02-28 00:45:26.576929 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_96cb3389-09b8-4702-8328-a447a406a3bc)
2026-02-28 00:45:26.576937 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_96cb3389-09b8-4702-8328-a447a406a3bc)
2026-02-28 00:45:26.576945 | orchestrator |
2026-02-28 00:45:26.576953 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:45:26.576961 | orchestrator | Saturday 28 February 2026 00:45:25 +0000 (0:00:00.431) 0:00:57.351 *****
2026-02-28 00:45:26.576969 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_810ccfde-b37c-4538-b69c-a55db736621a)
2026-02-28 00:45:26.576976 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_810ccfde-b37c-4538-b69c-a55db736621a)
2026-02-28 00:45:26.576984 | orchestrator |
2026-02-28 00:45:26.577027 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:45:26.577039 | orchestrator | Saturday 28 February 2026 00:45:25 +0000 (0:00:00.430) 0:00:57.782 *****
2026-02-28 00:45:26.577052 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_aebfdaae-19e6-4277-9533-aca5f477cfa9)
2026-02-28 00:45:26.577065 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_aebfdaae-19e6-4277-9533-aca5f477cfa9)
2026-02-28 00:45:26.577078 | orchestrator |
2026-02-28 00:45:26.577092 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:45:26.577106 | orchestrator | Saturday 28 February 2026 00:45:25 +0000 (0:00:00.424) 0:00:58.207 *****
2026-02-28 00:45:26.577120 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-28 00:45:26.577133 | orchestrator |
2026-02-28 00:45:26.577144 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:45:26.577152 | orchestrator | Saturday 28 February 2026 00:45:26 +0000 (0:00:00.344) 0:00:58.551 *****
2026-02-28 00:45:26.577160 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-02-28 00:45:26.577168 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-02-28 00:45:26.577176 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-02-28 00:45:26.577199 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-02-28 00:45:26.577208 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-02-28 00:45:26.577215 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-02-28 00:45:26.577226 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-02-28 00:45:26.577240 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-02-28 00:45:26.577254 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-02-28 00:45:26.577267 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-02-28 00:45:26.577282 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-02-28 00:45:26.577304 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-02-28 00:45:35.416940 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-02-28 00:45:35.417103 | orchestrator |
2026-02-28 00:45:35.417122 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:45:35.417135 | orchestrator | Saturday 28 February 2026 00:45:26 +0000 (0:00:00.434) 0:00:58.986 *****
2026-02-28 00:45:35.417174 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:35.417188 | orchestrator |
2026-02-28 00:45:35.417199 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:45:35.417210 | orchestrator | Saturday 28 February 2026 00:45:26 +0000 (0:00:00.203) 0:00:59.190 *****
2026-02-28 00:45:35.417221 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:35.417232 | orchestrator |
2026-02-28 00:45:35.417243 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:45:35.417254 | orchestrator | Saturday 28 February 2026 00:45:27 +0000 (0:00:00.679) 0:00:59.869 *****
2026-02-28 00:45:35.417264 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:35.417275 | orchestrator |
2026-02-28 00:45:35.417286 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:45:35.417297 | orchestrator | Saturday 28 February 2026 00:45:27 +0000 (0:00:00.203) 0:01:00.072 *****
2026-02-28 00:45:35.417308 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:35.417319 | orchestrator |
2026-02-28 00:45:35.417330 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:45:35.417341 | orchestrator | Saturday 28 February 2026 00:45:27 +0000 (0:00:00.223) 0:01:00.296 *****
2026-02-28 00:45:35.417351 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:35.417362 | orchestrator |
2026-02-28 00:45:35.417373 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:45:35.417384 | orchestrator | Saturday 28 February 2026 00:45:28 +0000 (0:00:00.208) 0:01:00.504 *****
2026-02-28 00:45:35.417395 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:35.417405 | orchestrator |
2026-02-28 00:45:35.417430 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:45:35.417441 | orchestrator | Saturday 28 February 2026 00:45:28 +0000 (0:00:00.220) 0:01:00.724 *****
2026-02-28 00:45:35.417452 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:35.417463 | orchestrator |
2026-02-28 00:45:35.417474 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:45:35.417485 | orchestrator | Saturday 28 February 2026 00:45:28 +0000 (0:00:00.209) 0:01:00.934 *****
2026-02-28 00:45:35.417496 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:35.417506 | orchestrator |
2026-02-28 00:45:35.417517 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:45:35.417528 | orchestrator | Saturday 28 February 2026 00:45:28 +0000 (0:00:00.212) 0:01:01.147 *****
2026-02-28 00:45:35.417539 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-02-28 00:45:35.417551 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-02-28 00:45:35.417562 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-02-28 00:45:35.417573 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-02-28 00:45:35.417584 | orchestrator |
2026-02-28 00:45:35.417595 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:45:35.417606 | orchestrator | Saturday 28 February 2026 00:45:29 +0000 (0:00:00.647) 0:01:01.795 *****
2026-02-28 00:45:35.417617 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:35.417628 | orchestrator |
2026-02-28 00:45:35.417639 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:45:35.417650 | orchestrator | Saturday 28 February 2026 00:45:29 +0000 (0:00:00.210) 0:01:02.005 *****
2026-02-28 00:45:35.417660 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:35.417671 | orchestrator |
2026-02-28 00:45:35.417682 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:45:35.417693 | orchestrator | Saturday 28 February 2026 00:45:29 +0000 (0:00:00.191) 0:01:02.196 *****
2026-02-28 00:45:35.417704 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:35.417714 | orchestrator |
2026-02-28 00:45:35.417725 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:45:35.417736 | orchestrator | Saturday 28 February 2026 00:45:30 +0000 (0:00:00.195) 0:01:02.392 *****
2026-02-28 00:45:35.417756 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:35.417767 | orchestrator |
2026-02-28 00:45:35.417778 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-28 00:45:35.417789 | orchestrator | Saturday 28 February 2026 00:45:30 +0000 (0:00:00.206) 0:01:02.599 *****
2026-02-28 00:45:35.417800 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:35.417811 | orchestrator |
2026-02-28 00:45:35.417822 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-28 00:45:35.417833 | orchestrator | Saturday 28 February 2026 00:45:30 +0000 (0:00:00.327) 0:01:02.926 *****
2026-02-28 00:45:35.417843 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'}})
2026-02-28 00:45:35.417855 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'}})
2026-02-28 00:45:35.417865 | orchestrator |
2026-02-28 00:45:35.417876 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-28 00:45:35.417887 | orchestrator | Saturday 28 February 2026 00:45:30 +0000 (0:00:00.204) 0:01:03.130 *****
2026-02-28 00:45:35.417900 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18', 'data_vg': 'ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'})
2026-02-28 00:45:35.417912 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539', 'data_vg': 'ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'})
2026-02-28 00:45:35.417923 | orchestrator |
2026-02-28 00:45:35.417934 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-28 00:45:35.417963 | orchestrator | Saturday 28 February 2026 00:45:32 +0000 (0:00:01.816) 0:01:04.947 *****
2026-02-28 00:45:35.417976 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18', 'data_vg': 'ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'})
2026-02-28 00:45:35.418008 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539', 'data_vg': 'ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'})
2026-02-28 00:45:35.418113 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:35.418127 | orchestrator |
2026-02-28 00:45:35.418138 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-28 00:45:35.418149 | orchestrator | Saturday 28 February 2026 00:45:32 +0000 (0:00:00.157) 0:01:05.104 *****
2026-02-28 00:45:35.418160 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18', 'data_vg': 'ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'})
2026-02-28 00:45:35.418171 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539', 'data_vg': 'ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'})
2026-02-28 00:45:35.418182 | orchestrator |
2026-02-28 00:45:35.418193 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-28 00:45:35.418204 | orchestrator | Saturday 28 February 2026 00:45:33 +0000 (0:00:01.207) 0:01:06.311 *****
2026-02-28 00:45:35.418215 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18', 'data_vg': 'ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'})
2026-02-28 00:45:35.418226 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539', 'data_vg': 'ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'})
2026-02-28 00:45:35.418237 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:35.418248 | orchestrator |
2026-02-28 00:45:35.418259 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-28 00:45:35.418270 | orchestrator | Saturday 28 February 2026 00:45:34 +0000 (0:00:00.143) 0:01:06.455 *****
2026-02-28 00:45:35.418281 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:35.418292 | orchestrator |
2026-02-28 00:45:35.418303 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-28 00:45:35.418314 | orchestrator | Saturday 28 February 2026 00:45:34 +0000 (0:00:00.138) 0:01:06.593 *****
2026-02-28 00:45:35.418334 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18', 'data_vg': 'ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'})
2026-02-28 00:45:35.418345 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539', 'data_vg': 'ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'})
2026-02-28 00:45:35.418357 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:35.418367 | orchestrator |
2026-02-28 00:45:35.418379 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-28 00:45:35.418390 | orchestrator | Saturday 28 February 2026 00:45:34 +0000 (0:00:00.149) 0:01:06.742 *****
2026-02-28 00:45:35.418400 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:35.418411 | orchestrator |
2026-02-28 00:45:35.418422 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-28 00:45:35.418433 | orchestrator | Saturday 28 February 2026 00:45:34 +0000 (0:00:00.115) 0:01:06.858 *****
2026-02-28 00:45:35.418444 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18', 'data_vg': 'ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'})
2026-02-28 00:45:35.418455 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539', 'data_vg': 'ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'})
2026-02-28 00:45:35.418479 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:35.418490 | orchestrator |
2026-02-28 00:45:35.418501 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-28 00:45:35.418521 | orchestrator | Saturday 28 February 2026 00:45:34 +0000 (0:00:00.128) 0:01:06.987 *****
2026-02-28 00:45:35.418532 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:35.418543 | orchestrator |
2026-02-28 00:45:35.418554 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-28 00:45:35.418565 | orchestrator | Saturday 28 February 2026 00:45:34 +0000 (0:00:00.129) 0:01:07.116 *****
2026-02-28 00:45:35.418576 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18', 'data_vg': 'ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'})
2026-02-28 00:45:35.418587 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539', 'data_vg': 'ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'})
2026-02-28 00:45:35.418598 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:35.418609 | orchestrator |
2026-02-28 00:45:35.418620 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-28 00:45:35.418631 | orchestrator | Saturday 28 February 2026 00:45:34 +0000 (0:00:00.147) 0:01:07.264 *****
2026-02-28 00:45:35.418642 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:45:35.418653 | orchestrator |
2026-02-28 00:45:35.418664 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-28 00:45:35.418675 | orchestrator | Saturday 28 February 2026 00:45:35 +0000 (0:00:00.398) 0:01:07.663 *****
2026-02-28 00:45:35.418696 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18', 'data_vg': 'ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'})
2026-02-28 00:45:41.940614 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539', 'data_vg': 'ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'})
2026-02-28 00:45:41.940718 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.940734 | orchestrator |
2026-02-28 00:45:41.940747 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-28 00:45:41.940760 | orchestrator | Saturday 28 February 2026 00:45:35 +0000 (0:00:00.164) 0:01:07.828 *****
2026-02-28 00:45:41.940771 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18', 'data_vg': 'ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'})
2026-02-28 00:45:41.940783 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539', 'data_vg': 'ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'})
2026-02-28 00:45:41.940819 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.940830 | orchestrator |
2026-02-28 00:45:41.940842 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-28 00:45:41.940853 | orchestrator | Saturday 28 February 2026 00:45:35 +0000 (0:00:00.159) 0:01:07.988 *****
2026-02-28 00:45:41.940864 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18', 'data_vg': 'ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'})
2026-02-28 00:45:41.940875 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539', 'data_vg': 'ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'})
2026-02-28 00:45:41.940886 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.940897 | orchestrator |
2026-02-28 00:45:41.940908 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-28 00:45:41.940936 | orchestrator | Saturday 28 February 2026 00:45:35 +0000 (0:00:00.184) 0:01:08.172 *****
2026-02-28 00:45:41.940947 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.940958 | orchestrator |
2026-02-28 00:45:41.940969 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-28 00:45:41.940980 | orchestrator | Saturday 28 February 2026 00:45:35 +0000 (0:00:00.145) 0:01:08.318 *****
2026-02-28 00:45:41.941027 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.941038 | orchestrator |
2026-02-28 00:45:41.941049 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-28 00:45:41.941060 | orchestrator | Saturday 28 February 2026 00:45:36 +0000 (0:00:00.146) 0:01:08.464 *****
2026-02-28 00:45:41.941071 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.941082 | orchestrator |
2026-02-28 00:45:41.941093 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-28 00:45:41.941104 | orchestrator | Saturday 28 February 2026 00:45:36 +0000 (0:00:00.185) 0:01:08.650 *****
2026-02-28 00:45:41.941115 | orchestrator | ok: [testbed-node-5] => {
2026-02-28 00:45:41.941127 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-02-28 00:45:41.941138 | orchestrator | }
2026-02-28 00:45:41.941152 | orchestrator |
2026-02-28 00:45:41.941165 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-28 00:45:41.941178 | orchestrator | Saturday 28 February 2026 00:45:36 +0000 (0:00:00.176) 0:01:08.826 *****
2026-02-28 00:45:41.941190 | orchestrator | ok: [testbed-node-5] => {
2026-02-28 00:45:41.941203 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-02-28 00:45:41.941216 | orchestrator | }
2026-02-28 00:45:41.941228 | orchestrator |
2026-02-28 00:45:41.941241 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-28 00:45:41.941253 | orchestrator | Saturday 28 February 2026 00:45:36 +0000 (0:00:00.203) 0:01:09.029 *****
2026-02-28 00:45:41.941266 | orchestrator | ok: [testbed-node-5] => {
2026-02-28 00:45:41.941278 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-02-28 00:45:41.941291 | orchestrator | }
2026-02-28 00:45:41.941303 | orchestrator |
2026-02-28 00:45:41.941316 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-28 00:45:41.941328 | orchestrator | Saturday 28 February 2026 00:45:36 +0000 (0:00:00.155) 0:01:09.184 *****
2026-02-28 00:45:41.941341 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:45:41.941353 | orchestrator |
2026-02-28 00:45:41.941366 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-28 00:45:41.941379 | orchestrator | Saturday 28 February 2026 00:45:37 +0000 (0:00:00.537) 0:01:09.721 *****
2026-02-28 00:45:41.941390 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:45:41.941403 | orchestrator |
2026-02-28 00:45:41.941415 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-28 00:45:41.941428 | orchestrator | Saturday 28 February 2026 00:45:37 +0000 (0:00:00.532) 0:01:10.254 *****
2026-02-28 00:45:41.941440 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:45:41.941460 | orchestrator |
2026-02-28 00:45:41.941472 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-28 00:45:41.941485 | orchestrator | Saturday 28 February 2026 00:45:38 +0000 (0:00:00.739) 0:01:10.994 *****
2026-02-28 00:45:41.941498 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:45:41.941509 | orchestrator |
2026-02-28 00:45:41.941520 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-28 00:45:41.941531 | orchestrator | Saturday 28 February 2026 00:45:38 +0000 (0:00:00.145) 0:01:11.139 *****
2026-02-28 00:45:41.941541 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.941553 | orchestrator |
2026-02-28 00:45:41.941563 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-28 00:45:41.941575 | orchestrator | Saturday 28 February 2026 00:45:38 +0000 (0:00:00.106) 0:01:11.245 *****
2026-02-28 00:45:41.941586 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.941597 | orchestrator |
2026-02-28 00:45:41.941607 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-28 00:45:41.941619 | orchestrator | Saturday 28 February 2026 00:45:39 +0000 (0:00:00.125) 0:01:11.371 *****
2026-02-28 00:45:41.941630 | orchestrator | ok: [testbed-node-5] => {
2026-02-28 00:45:41.941641 | orchestrator |     "vgs_report": {
2026-02-28 00:45:41.941653 | orchestrator |         "vg": []
2026-02-28 00:45:41.941681 | orchestrator |     }
2026-02-28 00:45:41.941693 | orchestrator | }
2026-02-28 00:45:41.941704 | orchestrator |
2026-02-28 00:45:41.941715 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-28 00:45:41.941726 | orchestrator | Saturday 28 February 2026 00:45:39 +0000 (0:00:00.149) 0:01:11.520 *****
2026-02-28 00:45:41.941737 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.941748 | orchestrator |
2026-02-28 00:45:41.941759 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-28 00:45:41.941770 | orchestrator | Saturday 28 February 2026 00:45:39 +0000 (0:00:00.191) 0:01:11.712 *****
2026-02-28 00:45:41.941781 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.941792 | orchestrator |
2026-02-28 00:45:41.941803 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-28 00:45:41.941814 | orchestrator | Saturday 28 February 2026 00:45:39 +0000 (0:00:00.136) 0:01:11.848 *****
2026-02-28 00:45:41.941825 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.941836 | orchestrator |
2026-02-28 00:45:41.941846 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-28 00:45:41.941858 | orchestrator | Saturday 28 February 2026 00:45:39 +0000 (0:00:00.133) 0:01:11.982 *****
2026-02-28 00:45:41.941868 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.941879 | orchestrator |
2026-02-28 00:45:41.941890 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-28 00:45:41.941901 | orchestrator | Saturday 28 February 2026 00:45:39 +0000 (0:00:00.126) 0:01:12.108 *****
2026-02-28 00:45:41.941912 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.941923 | orchestrator |
2026-02-28 00:45:41.941934 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-28 00:45:41.941945 | orchestrator | Saturday 28 February 2026 00:45:39 +0000 (0:00:00.158) 0:01:12.267 *****
2026-02-28 00:45:41.941955 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.941966 | orchestrator |
2026-02-28 00:45:41.941977 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-28 00:45:41.942156 | orchestrator | Saturday 28 February 2026 00:45:40 +0000 (0:00:00.151) 0:01:12.418 *****
2026-02-28 00:45:41.942172 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.942183 | orchestrator |
2026-02-28 00:45:41.942194 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-28 00:45:41.942205 | orchestrator | Saturday 28 February 2026 00:45:40 +0000 (0:00:00.163) 0:01:12.582 *****
2026-02-28 00:45:41.942216 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.942227 | orchestrator |
2026-02-28 00:45:41.942238 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-28 00:45:41.942258 | orchestrator | Saturday 28 February 2026 00:45:40 +0000 (0:00:00.462) 0:01:13.045 *****
2026-02-28 00:45:41.942269 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.942279 | orchestrator |
2026-02-28 00:45:41.942290 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-28 00:45:41.942301 | orchestrator | Saturday 28 February 2026 00:45:40 +0000 (0:00:00.145) 0:01:13.190 *****
2026-02-28 00:45:41.942312 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.942323 | orchestrator |
2026-02-28 00:45:41.942334 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-28 00:45:41.942345 | orchestrator | Saturday 28 February 2026 00:45:40 +0000 (0:00:00.138) 0:01:13.329 *****
2026-02-28 00:45:41.942356 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.942367 | orchestrator |
2026-02-28 00:45:41.942378 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-28 00:45:41.942389 | orchestrator | Saturday 28 February 2026 00:45:41 +0000 (0:00:00.136) 0:01:13.465 *****
2026-02-28 00:45:41.942400 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.942410 | orchestrator |
2026-02-28 00:45:41.942421 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-28 00:45:41.942432 | orchestrator | Saturday 28 February 2026 00:45:41 +0000 (0:00:00.150) 0:01:13.615 *****
2026-02-28 00:45:41.942443 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.942454 | orchestrator |
2026-02-28 00:45:41.942465 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-28 00:45:41.942476 | orchestrator | Saturday 28 February 2026 00:45:41 +0000 (0:00:00.147) 0:01:13.763 *****
2026-02-28 00:45:41.942487 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.942497 | orchestrator |
2026-02-28 00:45:41.942508 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-28 00:45:41.942519 | orchestrator | Saturday 28 February 2026 00:45:41 +0000 (0:00:00.138) 0:01:13.902 *****
2026-02-28 00:45:41.942531 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18', 'data_vg': 'ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'})
2026-02-28 00:45:41.942542 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539', 'data_vg': 'ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'})
2026-02-28 00:45:41.942553 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.942564 | orchestrator |
2026-02-28 00:45:41.942574 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-28 00:45:41.942585 | orchestrator | Saturday 28 February 2026 00:45:41 +0000 (0:00:00.145) 0:01:14.048 *****
2026-02-28 00:45:41.942596 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18', 'data_vg': 'ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'})
2026-02-28 00:45:41.942607 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539', 'data_vg': 'ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'})
2026-02-28 00:45:41.942618 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:41.942629 | orchestrator |
2026-02-28 00:45:41.942640 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-28 00:45:41.942651 | orchestrator | Saturday 28 February 2026 00:45:41 +0000 (0:00:00.149) 0:01:14.197 *****
2026-02-28 00:45:41.942672 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18', 'data_vg': 'ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'})
2026-02-28 00:45:45.126107 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539', 'data_vg': 'ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'})
2026-02-28 00:45:45.126212 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:45.126229 | orchestrator |
2026-02-28 00:45:45.126243 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-28 00:45:45.126256 | orchestrator | Saturday 28 February 2026 00:45:42 +0000 (0:00:00.159) 0:01:14.356 *****
2026-02-28 00:45:45.126293 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18', 'data_vg': 'ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'})
2026-02-28 00:45:45.126305 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539', 'data_vg': 'ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'})
2026-02-28 00:45:45.126317 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:45.126328 | orchestrator |
2026-02-28 00:45:45.126339 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-28 00:45:45.126351 | orchestrator | Saturday 28 February 2026 00:45:42 +0000 (0:00:00.165) 0:01:14.522 *****
2026-02-28 00:45:45.126362 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18', 'data_vg': 'ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'})
2026-02-28 00:45:45.126386 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539', 'data_vg': 'ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'})
2026-02-28 00:45:45.126398 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:45.126409 | orchestrator |
2026-02-28 00:45:45.126420 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-28 00:45:45.126432 | orchestrator | Saturday 28 February 2026 00:45:42 +0000 (0:00:00.171) 0:01:14.693 *****
2026-02-28 00:45:45.126443 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18', 'data_vg': 'ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'})
2026-02-28 00:45:45.126454 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539', 'data_vg': 'ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'})
2026-02-28 00:45:45.126465 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:45.126477 | orchestrator |
2026-02-28 00:45:45.126489 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-28 00:45:45.126500 | orchestrator | Saturday 28 February 2026 00:45:42 +0000 (0:00:00.386) 0:01:15.079 *****
2026-02-28 00:45:45.126511 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18', 'data_vg': 'ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'})
2026-02-28 00:45:45.126523 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539', 'data_vg': 'ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'})
2026-02-28 00:45:45.126536 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:45.126548 | orchestrator |
2026-02-28 00:45:45.126560 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-28 00:45:45.126573 | orchestrator | Saturday 28 February 2026 00:45:42 +0000 (0:00:00.160) 0:01:15.240 *****
2026-02-28 00:45:45.126586 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18', 'data_vg': 'ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'})
2026-02-28 00:45:45.126599 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539', 'data_vg': 'ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'})
2026-02-28 00:45:45.126612 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:45.126624 | orchestrator |
2026-02-28 00:45:45.126637 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-28 00:45:45.126649 | orchestrator | Saturday 28 February 2026 00:45:43 +0000 (0:00:00.178) 0:01:15.419 *****
2026-02-28 00:45:45.126662 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:45:45.126675 | orchestrator |
2026-02-28 00:45:45.126688 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-28 00:45:45.126700 | orchestrator | Saturday 28 February 2026 00:45:43 +0000 (0:00:00.524) 0:01:15.943 *****
2026-02-28 00:45:45.126712 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:45:45.126725 | orchestrator |
2026-02-28 00:45:45.126737 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-28 00:45:45.126758 | orchestrator | Saturday 28 February 2026 00:45:44 +0000 (0:00:00.531) 0:01:16.475 *****
2026-02-28 00:45:45.126771 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:45:45.126783 | orchestrator |
2026-02-28 00:45:45.126795 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-28 00:45:45.126808 | orchestrator | Saturday 28 February 2026 00:45:44 +0000 (0:00:00.140) 0:01:16.616 *****
2026-02-28 00:45:45.126821 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18', 'vg_name': 'ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'})
2026-02-28 00:45:45.126835 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539', 'vg_name': 'ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'})
2026-02-28 00:45:45.126847 | orchestrator |
2026-02-28 00:45:45.126860 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-28 00:45:45.126873 | orchestrator | Saturday 28 February 2026 00:45:44 +0000 (0:00:00.181) 0:01:16.797 *****
2026-02-28 00:45:45.126904 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18', 'data_vg': 'ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'})
2026-02-28 00:45:45.126918 | orchestrator | skipping: [testbed-node-5] => (item={'data':
'osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539', 'data_vg': 'ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'})  2026-02-28 00:45:45.126931 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:45.126943 | orchestrator | 2026-02-28 00:45:45.126954 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-28 00:45:45.126965 | orchestrator | Saturday 28 February 2026 00:45:44 +0000 (0:00:00.166) 0:01:16.964 ***** 2026-02-28 00:45:45.126976 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18', 'data_vg': 'ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'})  2026-02-28 00:45:45.127025 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539', 'data_vg': 'ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'})  2026-02-28 00:45:45.127037 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:45.127048 | orchestrator | 2026-02-28 00:45:45.127059 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-28 00:45:45.127070 | orchestrator | Saturday 28 February 2026 00:45:44 +0000 (0:00:00.164) 0:01:17.129 ***** 2026-02-28 00:45:45.127080 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18', 'data_vg': 'ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'})  2026-02-28 00:45:45.127092 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539', 'data_vg': 'ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'})  2026-02-28 00:45:45.127103 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:45.127113 | orchestrator | 2026-02-28 00:45:45.127124 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-28 00:45:45.127135 | orchestrator | Saturday 28 February 2026 00:45:44 +0000 (0:00:00.164) 0:01:17.293 ***** 2026-02-28 00:45:45.127146 | 
orchestrator | ok: [testbed-node-5] => { 2026-02-28 00:45:45.127157 | orchestrator |  "lvm_report": { 2026-02-28 00:45:45.127169 | orchestrator |  "lv": [ 2026-02-28 00:45:45.127180 | orchestrator |  { 2026-02-28 00:45:45.127192 | orchestrator |  "lv_name": "osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18", 2026-02-28 00:45:45.127204 | orchestrator |  "vg_name": "ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18" 2026-02-28 00:45:45.127215 | orchestrator |  }, 2026-02-28 00:45:45.127226 | orchestrator |  { 2026-02-28 00:45:45.127237 | orchestrator |  "lv_name": "osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539", 2026-02-28 00:45:45.127248 | orchestrator |  "vg_name": "ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539" 2026-02-28 00:45:45.127259 | orchestrator |  } 2026-02-28 00:45:45.127270 | orchestrator |  ], 2026-02-28 00:45:45.127281 | orchestrator |  "pv": [ 2026-02-28 00:45:45.127299 | orchestrator |  { 2026-02-28 00:45:45.127310 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-28 00:45:45.127321 | orchestrator |  "vg_name": "ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18" 2026-02-28 00:45:45.127332 | orchestrator |  }, 2026-02-28 00:45:45.127343 | orchestrator |  { 2026-02-28 00:45:45.127354 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-28 00:45:45.127365 | orchestrator |  "vg_name": "ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539" 2026-02-28 00:45:45.127376 | orchestrator |  } 2026-02-28 00:45:45.127387 | orchestrator |  ] 2026-02-28 00:45:45.127398 | orchestrator |  } 2026-02-28 00:45:45.127409 | orchestrator | } 2026-02-28 00:45:45.127421 | orchestrator | 2026-02-28 00:45:45.127432 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:45:45.127443 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-28 00:45:45.127454 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-28 00:45:45.127465 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-28 00:45:45.127476 | orchestrator | 2026-02-28 00:45:45.127487 | orchestrator | 2026-02-28 00:45:45.127498 | orchestrator | 2026-02-28 00:45:45.127509 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:45:45.127520 | orchestrator | Saturday 28 February 2026 00:45:45 +0000 (0:00:00.148) 0:01:17.442 ***** 2026-02-28 00:45:45.127530 | orchestrator | =============================================================================== 2026-02-28 00:45:45.127541 | orchestrator | Create block VGs -------------------------------------------------------- 5.94s 2026-02-28 00:45:45.127552 | orchestrator | Create block LVs -------------------------------------------------------- 4.12s 2026-02-28 00:45:45.127563 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.85s 2026-02-28 00:45:45.127574 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.83s 2026-02-28 00:45:45.127592 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.62s 2026-02-28 00:45:45.127604 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.59s 2026-02-28 00:45:45.127615 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.58s 2026-02-28 00:45:45.127625 | orchestrator | Add known partitions to the list of available block devices ------------- 1.57s 2026-02-28 00:45:45.127643 | orchestrator | Add known links to the list of available block devices ------------------ 1.49s 2026-02-28 00:45:45.540486 | orchestrator | Add known partitions to the list of available block devices ------------- 1.14s 2026-02-28 00:45:45.541626 | orchestrator | Print LVM report data --------------------------------------------------- 0.95s 2026-02-28 00:45:45.541681 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.93s 2026-02-28 00:45:45.541693 | orchestrator | Add known links to the list of available block devices ------------------ 0.89s 2026-02-28 00:45:45.541704 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.84s 2026-02-28 00:45:45.541715 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s 2026-02-28 00:45:45.541726 | orchestrator | Calculate size needed for WAL LVs on ceph_db_wal_devices ---------------- 0.83s 2026-02-28 00:45:45.541737 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.81s 2026-02-28 00:45:45.541748 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.79s 2026-02-28 00:45:45.541759 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.75s 2026-02-28 00:45:45.541770 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.74s 2026-02-28 00:45:57.879227 | orchestrator | 2026-02-28 00:45:57 | INFO  | Prepare task for execution of facts. 2026-02-28 00:45:57.958709 | orchestrator | 2026-02-28 00:45:57 | INFO  | Task a24fa478-a46f-4ab5-915c-4b47caeccbba (facts) was prepared for execution. 2026-02-28 00:45:57.958861 | orchestrator | 2026-02-28 00:45:57 | INFO  | It takes a moment until task a24fa478-a46f-4ab5-915c-4b47caeccbba (facts) has been started and output is visible here. 
2026-02-28 00:46:10.730910 | orchestrator | 2026-02-28 00:46:10.731049 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-28 00:46:10.731069 | orchestrator | 2026-02-28 00:46:10.731086 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-28 00:46:10.731096 | orchestrator | Saturday 28 February 2026 00:46:02 +0000 (0:00:00.281) 0:00:00.281 ***** 2026-02-28 00:46:10.731105 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:46:10.731115 | orchestrator | ok: [testbed-manager] 2026-02-28 00:46:10.731123 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:46:10.731132 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:46:10.731140 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:46:10.731148 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:46:10.731156 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:46:10.731164 | orchestrator | 2026-02-28 00:46:10.731171 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-28 00:46:10.731179 | orchestrator | Saturday 28 February 2026 00:46:03 +0000 (0:00:01.083) 0:00:01.365 ***** 2026-02-28 00:46:10.731187 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:46:10.731197 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:46:10.731204 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:46:10.731212 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:46:10.731220 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:46:10.731228 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:46:10.731237 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:46:10.731244 | orchestrator | 2026-02-28 00:46:10.731252 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-28 00:46:10.731260 | orchestrator | 2026-02-28 00:46:10.731269 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-28 00:46:10.731276 | orchestrator | Saturday 28 February 2026 00:46:04 +0000 (0:00:01.332) 0:00:02.697 ***** 2026-02-28 00:46:10.731285 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:46:10.731292 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:46:10.731300 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:46:10.731308 | orchestrator | ok: [testbed-manager] 2026-02-28 00:46:10.731316 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:46:10.731324 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:46:10.731332 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:46:10.731340 | orchestrator | 2026-02-28 00:46:10.731348 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-28 00:46:10.731355 | orchestrator | 2026-02-28 00:46:10.731363 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-28 00:46:10.731371 | orchestrator | Saturday 28 February 2026 00:46:09 +0000 (0:00:05.103) 0:00:07.801 ***** 2026-02-28 00:46:10.731379 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:46:10.731387 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:46:10.731395 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:46:10.731403 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:46:10.731411 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:46:10.731419 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:46:10.731427 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:46:10.731435 | orchestrator | 2026-02-28 00:46:10.731443 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:46:10.731451 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:46:10.731462 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-28 00:46:10.731496 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:46:10.731506 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:46:10.731515 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:46:10.731525 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:46:10.731534 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:46:10.731543 | orchestrator | 2026-02-28 00:46:10.731552 | orchestrator | 2026-02-28 00:46:10.731562 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:46:10.731571 | orchestrator | Saturday 28 February 2026 00:46:10 +0000 (0:00:00.544) 0:00:08.345 ***** 2026-02-28 00:46:10.731580 | orchestrator | =============================================================================== 2026-02-28 00:46:10.731590 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.10s 2026-02-28 00:46:10.731599 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.33s 2026-02-28 00:46:10.731608 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.08s 2026-02-28 00:46:10.731617 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2026-02-28 00:46:23.118870 | orchestrator | 2026-02-28 00:46:23 | INFO  | Prepare task for execution of frr. 2026-02-28 00:46:23.193541 | orchestrator | 2026-02-28 00:46:23 | INFO  | Task d443cf33-a0e5-4097-b82c-5378fb3f0113 (frr) was prepared for execution. 
2026-02-28 00:46:23.193641 | orchestrator | 2026-02-28 00:46:23 | INFO  | It takes a moment until task d443cf33-a0e5-4097-b82c-5378fb3f0113 (frr) has been started and output is visible here. 2026-02-28 00:46:50.519463 | orchestrator | 2026-02-28 00:46:50.520407 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-02-28 00:46:50.520443 | orchestrator | 2026-02-28 00:46:50.520457 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-02-28 00:46:50.520469 | orchestrator | Saturday 28 February 2026 00:46:27 +0000 (0:00:00.242) 0:00:00.242 ***** 2026-02-28 00:46:50.520481 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-02-28 00:46:50.520493 | orchestrator | 2026-02-28 00:46:50.520504 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-02-28 00:46:50.520515 | orchestrator | Saturday 28 February 2026 00:46:27 +0000 (0:00:00.220) 0:00:00.462 ***** 2026-02-28 00:46:50.520525 | orchestrator | changed: [testbed-manager] 2026-02-28 00:46:50.520537 | orchestrator | 2026-02-28 00:46:50.520548 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-02-28 00:46:50.520559 | orchestrator | Saturday 28 February 2026 00:46:29 +0000 (0:00:01.295) 0:00:01.757 ***** 2026-02-28 00:46:50.520570 | orchestrator | changed: [testbed-manager] 2026-02-28 00:46:50.520581 | orchestrator | 2026-02-28 00:46:50.520591 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-02-28 00:46:50.520602 | orchestrator | Saturday 28 February 2026 00:46:38 +0000 (0:00:09.816) 0:00:11.574 ***** 2026-02-28 00:46:50.520613 | orchestrator | ok: [testbed-manager] 2026-02-28 00:46:50.520624 | orchestrator | 2026-02-28 00:46:50.520635 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-02-28 00:46:50.520646 | orchestrator | Saturday 28 February 2026 00:46:39 +0000 (0:00:00.967) 0:00:12.542 ***** 2026-02-28 00:46:50.520657 | orchestrator | changed: [testbed-manager] 2026-02-28 00:46:50.520691 | orchestrator | 2026-02-28 00:46:50.520703 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-02-28 00:46:50.520714 | orchestrator | Saturday 28 February 2026 00:46:40 +0000 (0:00:00.886) 0:00:13.428 ***** 2026-02-28 00:46:50.520725 | orchestrator | ok: [testbed-manager] 2026-02-28 00:46:50.520736 | orchestrator | 2026-02-28 00:46:50.520747 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-02-28 00:46:50.520758 | orchestrator | Saturday 28 February 2026 00:46:41 +0000 (0:00:01.107) 0:00:14.536 ***** 2026-02-28 00:46:50.520769 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:46:50.520779 | orchestrator | 2026-02-28 00:46:50.520790 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-02-28 00:46:50.520801 | orchestrator | Saturday 28 February 2026 00:46:42 +0000 (0:00:00.149) 0:00:14.686 ***** 2026-02-28 00:46:50.520811 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:46:50.520822 | orchestrator | 2026-02-28 00:46:50.520833 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-02-28 00:46:50.520844 | orchestrator | Saturday 28 February 2026 00:46:42 +0000 (0:00:00.135) 0:00:14.821 ***** 2026-02-28 00:46:50.520854 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:46:50.520865 | orchestrator | 2026-02-28 00:46:50.520876 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-02-28 00:46:50.520887 | orchestrator | Saturday 28 February 2026 00:46:42 +0000 (0:00:00.144) 0:00:14.965 ***** 2026-02-28 
00:46:50.520898 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:46:50.520908 | orchestrator | 2026-02-28 00:46:50.520919 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-02-28 00:46:50.520930 | orchestrator | Saturday 28 February 2026 00:46:42 +0000 (0:00:00.133) 0:00:15.098 ***** 2026-02-28 00:46:50.520940 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:46:50.520951 | orchestrator | 2026-02-28 00:46:50.520996 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-02-28 00:46:50.521008 | orchestrator | Saturday 28 February 2026 00:46:42 +0000 (0:00:00.145) 0:00:15.244 ***** 2026-02-28 00:46:50.521019 | orchestrator | changed: [testbed-manager] 2026-02-28 00:46:50.521029 | orchestrator | 2026-02-28 00:46:50.521040 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-02-28 00:46:50.521051 | orchestrator | Saturday 28 February 2026 00:46:43 +0000 (0:00:01.075) 0:00:16.320 ***** 2026-02-28 00:46:50.521061 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-02-28 00:46:50.521072 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-02-28 00:46:50.521084 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-02-28 00:46:50.521095 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-02-28 00:46:50.521106 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-02-28 00:46:50.521116 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-02-28 00:46:50.521127 | orchestrator | 2026-02-28 00:46:50.521138 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-02-28 00:46:50.521148 | orchestrator | Saturday 28 February 2026 00:46:46 +0000 (0:00:03.123) 0:00:19.443 ***** 2026-02-28 00:46:50.521159 | orchestrator | ok: [testbed-manager] 2026-02-28 00:46:50.521170 | orchestrator | 2026-02-28 00:46:50.521181 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-02-28 00:46:50.521191 | orchestrator | Saturday 28 February 2026 00:46:47 +0000 (0:00:01.148) 0:00:20.592 ***** 2026-02-28 00:46:50.521202 | orchestrator | changed: [testbed-manager] 2026-02-28 00:46:50.521212 | orchestrator | 2026-02-28 00:46:50.521223 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:46:50.521242 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-28 00:46:50.521252 | orchestrator | 2026-02-28 00:46:50.521263 | orchestrator | 2026-02-28 00:46:50.521298 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:46:50.521310 | orchestrator | Saturday 28 February 2026 00:46:50 +0000 (0:00:02.320) 0:00:22.912 ***** 2026-02-28 00:46:50.521321 | orchestrator | =============================================================================== 2026-02-28 00:46:50.521332 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.82s 2026-02-28 00:46:50.521342 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.12s 2026-02-28 00:46:50.521353 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 2.32s 2026-02-28 00:46:50.521364 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.30s 2026-02-28 00:46:50.521375 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.15s 
2026-02-28 00:46:50.521386 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.11s 2026-02-28 00:46:50.521396 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.08s 2026-02-28 00:46:50.521407 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.97s 2026-02-28 00:46:50.521418 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.89s 2026-02-28 00:46:50.521428 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s 2026-02-28 00:46:50.521439 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.15s 2026-02-28 00:46:50.521450 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2026-02-28 00:46:50.521460 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.14s 2026-02-28 00:46:50.521471 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.14s 2026-02-28 00:46:50.521482 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s 2026-02-28 00:46:50.743752 | orchestrator | 2026-02-28 00:46:50.744993 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Feb 28 00:46:50 UTC 2026 2026-02-28 00:46:50.745019 | orchestrator | 2026-02-28 00:46:52.472688 | orchestrator | 2026-02-28 00:46:52 | INFO  | Collection nutshell is prepared for execution 2026-02-28 00:46:52.472773 | orchestrator | 2026-02-28 00:46:52 | INFO  | A [0] - dotfiles 2026-02-28 00:47:02.505694 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [0] - homer 2026-02-28 00:47:02.505790 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [0] - netdata 2026-02-28 00:47:02.505928 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [0] - openstackclient 2026-02-28 00:47:02.506089 | orchestrator | 2026-02-28 
00:47:02 | INFO  | A [0] - phpmyadmin 2026-02-28 00:47:02.506107 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [0] - common 2026-02-28 00:47:02.510416 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [1] -- loadbalancer 2026-02-28 00:47:02.510477 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [2] --- opensearch 2026-02-28 00:47:02.510490 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [2] --- mariadb-ng 2026-02-28 00:47:02.510505 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [3] ---- horizon 2026-02-28 00:47:02.510525 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [3] ---- keystone 2026-02-28 00:47:02.510554 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [4] ----- neutron 2026-02-28 00:47:02.510573 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [5] ------ wait-for-nova 2026-02-28 00:47:02.510751 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [6] ------- octavia 2026-02-28 00:47:02.512623 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [4] ----- barbican 2026-02-28 00:47:02.512862 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [4] ----- designate 2026-02-28 00:47:02.512891 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [4] ----- ironic 2026-02-28 00:47:02.512904 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [4] ----- placement 2026-02-28 00:47:02.512915 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [4] ----- magnum 2026-02-28 00:47:02.513853 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [1] -- openvswitch 2026-02-28 00:47:02.513904 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [2] --- ovn 2026-02-28 00:47:02.514198 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [1] -- memcached 2026-02-28 00:47:02.514425 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [1] -- redis 2026-02-28 00:47:02.514896 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [1] -- rabbitmq-ng 2026-02-28 00:47:02.514992 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [0] - kubernetes 2026-02-28 00:47:02.517499 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [1] -- 
kubeconfig 2026-02-28 00:47:02.517834 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [1] -- copy-kubeconfig 2026-02-28 00:47:02.517867 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [0] - ceph 2026-02-28 00:47:02.520073 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [1] -- ceph-pools 2026-02-28 00:47:02.520104 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [2] --- copy-ceph-keys 2026-02-28 00:47:02.520116 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [3] ---- cephclient 2026-02-28 00:47:02.520127 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-02-28 00:47:02.520419 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [4] ----- wait-for-keystone 2026-02-28 00:47:02.520450 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [5] ------ kolla-ceph-rgw 2026-02-28 00:47:02.520558 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [5] ------ glance 2026-02-28 00:47:02.520997 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [5] ------ cinder 2026-02-28 00:47:02.521022 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [5] ------ nova 2026-02-28 00:47:02.521116 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [4] ----- prometheus 2026-02-28 00:47:02.521285 | orchestrator | 2026-02-28 00:47:02 | INFO  | A [5] ------ grafana 2026-02-28 00:47:02.713527 | orchestrator | 2026-02-28 00:47:02 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-02-28 00:47:02.713607 | orchestrator | 2026-02-28 00:47:02 | INFO  | Tasks are running in the background 2026-02-28 00:47:05.658506 | orchestrator | 2026-02-28 00:47:05 | INFO  | No task IDs specified, wait for all currently running tasks 2026-02-28 00:47:07.774524 | orchestrator | 2026-02-28 00:47:07 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED 2026-02-28 00:47:07.778153 | orchestrator | 2026-02-28 00:47:07 | INFO  | Task d933a821-fc10-4999-8053-4a77e2a6fee4 is in state STARTED 2026-02-28 00:47:07.780523 | orchestrator | 2026-02-28 00:47:07 | INFO 
| Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:47:07.780552 | orchestrator | 2026-02-28 00:47:07 | INFO  | Task 5e3655da-e3e6-43f9-a57a-a1c993f58286 is in state STARTED
2026-02-28 00:47:07.780564 | orchestrator | 2026-02-28 00:47:07 | INFO  | Task 4d8376f3-a2b7-449b-8353-fdea1456ef19 is in state STARTED
2026-02-28 00:47:07.780576 | orchestrator | 2026-02-28 00:47:07 | INFO  | Task 3fd264b5-a6f3-4e44-ba1b-6e3cd1b2b7df is in state STARTED
2026-02-28 00:47:07.781146 | orchestrator | 2026-02-28 00:47:07 | INFO  | Task 3f435509-2763-44c1-9603-8e560cc8c02e is in state STARTED
2026-02-28 00:47:07.781189 | orchestrator | 2026-02-28 00:47:07 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:32.723836 | orchestrator | 2026-02-28 00:47:32 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:47:32.728557 | orchestrator | 2026-02-28 00:47:32 | INFO  | Task d933a821-fc10-4999-8053-4a77e2a6fee4 is in state STARTED
2026-02-28 00:47:32.735030 | orchestrator | 2026-02-28 00:47:32 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:47:32.747895 | orchestrator | 2026-02-28 00:47:32 | INFO  | Task 5e3655da-e3e6-43f9-a57a-a1c993f58286 is in state STARTED
2026-02-28 00:47:32.752623 | orchestrator | 2026-02-28 00:47:32 | INFO  | Task 4d8376f3-a2b7-449b-8353-fdea1456ef19 is in state STARTED
2026-02-28 00:47:32.765294 | orchestrator | 
2026-02-28 00:47:32.765491 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-02-28 00:47:32.765512 | orchestrator | 
2026-02-28 00:47:32.765524 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-02-28 00:47:32.765536 | orchestrator | Saturday 28 February 2026 00:47:16 +0000 (0:00:00.544) 0:00:00.544 *****
2026-02-28 00:47:32.765547 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:47:32.765559 | orchestrator | changed: [testbed-manager]
2026-02-28 00:47:32.765570 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:47:32.765581 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:47:32.765592 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:47:32.765602 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:47:32.765683 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:47:32.765701 | orchestrator | 
2026-02-28 00:47:32.765713 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-02-28 00:47:32.765724 | orchestrator | Saturday 28 February 2026 00:47:20 +0000 (0:00:03.774) 0:00:04.318 *****
2026-02-28 00:47:32.765736 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-02-28 00:47:32.765747 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-02-28 00:47:32.765758 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-02-28 00:47:32.765769 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-02-28 00:47:32.765780 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-02-28 00:47:32.765791 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-02-28 00:47:32.765802 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-02-28 00:47:32.765812 | orchestrator | 
2026-02-28 00:47:32.765823 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-02-28 00:47:32.765835 | orchestrator | Saturday 28 February 2026 00:47:22 +0000 (0:00:02.656) 0:00:06.975 *****
2026-02-28 00:47:32.765850 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-28 00:47:21.108101', 'end': '2026-02-28 00:47:21.112252', 'delta': '0:00:00.004151', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-28 00:47:32.765873 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-28 00:47:21.183421', 'end': '2026-02-28 00:47:21.188377', 'delta': '0:00:00.004956', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-28 00:47:32.765926 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-28 00:47:21.276559', 'end': '2026-02-28 00:47:21.282827', 'delta': '0:00:00.006268', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-28 00:47:32.766083 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-28 00:47:22.239105', 'end': '2026-02-28 00:47:22.244732', 'delta': '0:00:00.005627', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-28 00:47:32.766117 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-28 00:47:21.805678', 'end': '2026-02-28 00:47:21.813082', 'delta': '0:00:00.007404', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-28 00:47:32.766139 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-28 00:47:22.496650', 'end': '2026-02-28 00:47:22.506556', 'delta': '0:00:00.009906', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-28 00:47:32.766159 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-28 00:47:22.705216', 'end': '2026-02-28 00:47:22.712976', 'delta': '0:00:00.007760', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-28 00:47:32.766194 | orchestrator | 
2026-02-28 00:47:32.766215 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-02-28 00:47:32.766234 | orchestrator | Saturday 28 February 2026 00:47:25 +0000 (0:00:02.940) 0:00:09.915 *****
2026-02-28 00:47:32.766254 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-02-28 00:47:32.766274 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-02-28 00:47:32.766292 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-02-28 00:47:32.766311 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-02-28 00:47:32.766329 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-02-28 00:47:32.766358 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-02-28 00:47:32.766380 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-02-28 00:47:32.766400 | orchestrator | 
2026-02-28 00:47:32.766421 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-02-28 00:47:32.766440 | orchestrator | Saturday 28 February 2026 00:47:27 +0000 (0:00:01.703) 0:00:11.619 *****
2026-02-28 00:47:32.766457 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-02-28 00:47:32.766469 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-02-28 00:47:32.766482 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-02-28 00:47:32.766494 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-02-28 00:47:32.766506 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-02-28 00:47:32.766518 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-02-28 00:47:32.766530 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-02-28 00:47:32.766542 | orchestrator | 
2026-02-28 00:47:32.766555 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:47:32.766579 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:47:32.766593 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:47:32.766606 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:47:32.766618 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:47:32.766631 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:47:32.766643 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:47:32.766655 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:47:32.766667 | orchestrator | 
2026-02-28 00:47:32.766679 | orchestrator | 
2026-02-28 00:47:32.766692 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:47:32.766705 | orchestrator | Saturday 28 February 2026 00:47:30 +0000 (0:00:03.145) 0:00:14.764 *****
2026-02-28 00:47:32.766717 | orchestrator | ===============================================================================
2026-02-28 00:47:32.766728 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.77s
2026-02-28 00:47:32.766747 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.15s
2026-02-28 00:47:32.766758 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.94s
2026-02-28 00:47:32.766769 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.66s
2026-02-28 00:47:32.766780 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.70s
2026-02-28 00:47:32.766791 | orchestrator | 2026-02-28 00:47:32 | INFO  | Task 3fd264b5-a6f3-4e44-ba1b-6e3cd1b2b7df is in state SUCCESS
2026-02-28 00:47:32.766803 | orchestrator | 2026-02-28 00:47:32 | INFO  | Task 3f435509-2763-44c1-9603-8e560cc8c02e is in state STARTED
2026-02-28 00:47:32.766814 | orchestrator | 2026-02-28 00:47:32 | INFO  | Task 399ced28-2a01-42c9-b976-1444b3ccde84 is in state STARTED
2026-02-28 00:47:32.766825 | orchestrator | 2026-02-28 00:47:32 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:35.870561 | orchestrator | 2026-02-28 00:47:35 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:47:35.871385 | orchestrator | 2026-02-28 00:47:35 | INFO  | Task d933a821-fc10-4999-8053-4a77e2a6fee4 is in state STARTED
2026-02-28 00:47:35.872395 | orchestrator | 2026-02-28 00:47:35 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:47:35.874086 | orchestrator | 2026-02-28 00:47:35 | INFO  | Task 5e3655da-e3e6-43f9-a57a-a1c993f58286 is in state STARTED
2026-02-28 00:47:35.874966 | orchestrator | 2026-02-28 00:47:35 | INFO  | Task 4d8376f3-a2b7-449b-8353-fdea1456ef19 is in state STARTED
2026-02-28 00:47:35.876119 | orchestrator | 2026-02-28 00:47:35 | INFO  | Task 3f435509-2763-44c1-9603-8e560cc8c02e is in state STARTED
2026-02-28 00:47:35.876922 | orchestrator | 2026-02-28 00:47:35 | INFO  | Task 399ced28-2a01-42c9-b976-1444b3ccde84 is in state STARTED
2026-02-28 00:47:35.877139 | orchestrator | 2026-02-28 00:47:35 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:58.170512 | orchestrator | 2026-02-28 00:47:58 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:47:58.172643 | orchestrator | 2026-02-28 00:47:58 | INFO  | Task d933a821-fc10-4999-8053-4a77e2a6fee4 is in state STARTED
2026-02-28 00:47:58.175178 | orchestrator | 2026-02-28 00:47:58 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:47:58.178628 | orchestrator | 2026-02-28 00:47:58 | INFO  | Task 5e3655da-e3e6-43f9-a57a-a1c993f58286 is in state STARTED
2026-02-28 00:47:58.181190 | orchestrator | 2026-02-28 00:47:58 | INFO  | Task 4d8376f3-a2b7-449b-8353-fdea1456ef19 is in state STARTED
2026-02-28 00:47:58.184483 | orchestrator | 2026-02-28 00:47:58 | INFO  | Task 3f435509-2763-44c1-9603-8e560cc8c02e is in state SUCCESS
2026-02-28 00:47:58.187337 | orchestrator | 2026-02-28 00:47:58 | INFO  | Task 399ced28-2a01-42c9-b976-1444b3ccde84 is in state STARTED
2026-02-28 00:47:58.187397 | orchestrator | 2026-02-28 00:47:58 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:01.369322 | orchestrator | 2026-02-28 00:48:01 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:48:01.369427 | orchestrator | 2026-02-28 00:48:01 | INFO  | Task d933a821-fc10-4999-8053-4a77e2a6fee4 is in state STARTED
2026-02-28 00:48:01.371805 | orchestrator | 2026-02-28 00:48:01 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:48:01.376774 | orchestrator | 2026-02-28 00:48:01 | INFO  | Task 5e3655da-e3e6-43f9-a57a-a1c993f58286 is in state STARTED
2026-02-28 00:48:01.376841 | orchestrator | 2026-02-28 00:48:01 | INFO  | Task 4d8376f3-a2b7-449b-8353-fdea1456ef19 is in state STARTED
2026-02-28 00:48:01.377225 | orchestrator | 2026-02-28 00:48:01 | INFO  | Task 399ced28-2a01-42c9-b976-1444b3ccde84 is in state STARTED
2026-02-28 00:48:01.377243 | orchestrator | 2026-02-28 00:48:01 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:10.682003 | orchestrator | 2026-02-28 00:48:10 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:48:10.682191 | orchestrator | 2026-02-28 00:48:10 | INFO  | Task d933a821-fc10-4999-8053-4a77e2a6fee4 is in state SUCCESS
2026-02-28 00:48:10.684547 | orchestrator | 2026-02-28 00:48:10 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:48:10.686536 | orchestrator | 2026-02-28 00:48:10 | INFO  | Task 5e3655da-e3e6-43f9-a57a-a1c993f58286 is in state STARTED
2026-02-28 00:48:10.687609 | orchestrator | 2026-02-28 00:48:10 | INFO  | Task 4d8376f3-a2b7-449b-8353-fdea1456ef19 is in state STARTED
2026-02-28 00:48:10.688512 | orchestrator | 2026-02-28 00:48:10 | INFO  | Task 399ced28-2a01-42c9-b976-1444b3ccde84 is in state STARTED
2026-02-28 00:48:10.688547 | orchestrator | 2026-02-28 00:48:10 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:13.751297 | orchestrator | 2026-02-28 00:48:13 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:48:13.751391 | orchestrator | 2026-02-28 00:48:13 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:48:13.752750 | orchestrator | 2026-02-28 00:48:13 | INFO  | Task 5e3655da-e3e6-43f9-a57a-a1c993f58286 is in state STARTED
2026-02-28 00:48:13.753827 | orchestrator | 2026-02-28 00:48:13 | INFO  | Task 4d8376f3-a2b7-449b-8353-fdea1456ef19 is in state STARTED
2026-02-28 00:48:13.755224 | orchestrator | 2026-02-28 00:48:13 | INFO  | Task 399ced28-2a01-42c9-b976-1444b3ccde84 is in state STARTED
2026-02-28 00:48:13.755375 | orchestrator | 2026-02-28 00:48:13 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:32.217086 | orchestrator | 2026-02-28 00:48:32 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:48:32.218256 | orchestrator | 2026-02-28 00:48:32 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:48:32.219415 | orchestrator | 2026-02-28 00:48:32 | INFO  | Task 5e3655da-e3e6-43f9-a57a-a1c993f58286 is in state STARTED
2026-02-28 00:48:32.221000 | 
orchestrator | 2026-02-28 00:48:32 | INFO  | Task 4d8376f3-a2b7-449b-8353-fdea1456ef19 is in state STARTED 2026-02-28 00:48:32.223659 | orchestrator | 2026-02-28 00:48:32 | INFO  | Task 399ced28-2a01-42c9-b976-1444b3ccde84 is in state STARTED 2026-02-28 00:48:32.223707 | orchestrator | 2026-02-28 00:48:32 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:48:35.262190 | orchestrator | 2026-02-28 00:48:35 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED 2026-02-28 00:48:35.262291 | orchestrator | 2026-02-28 00:48:35 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:48:35.262319 | orchestrator | 2026-02-28 00:48:35 | INFO  | Task 5e3655da-e3e6-43f9-a57a-a1c993f58286 is in state STARTED 2026-02-28 00:48:35.263782 | orchestrator | 2026-02-28 00:48:35 | INFO  | Task 4d8376f3-a2b7-449b-8353-fdea1456ef19 is in state STARTED 2026-02-28 00:48:35.266882 | orchestrator | 2026-02-28 00:48:35 | INFO  | Task 399ced28-2a01-42c9-b976-1444b3ccde84 is in state STARTED 2026-02-28 00:48:35.267009 | orchestrator | 2026-02-28 00:48:35 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:48:38.312222 | orchestrator | 2026-02-28 00:48:38 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED 2026-02-28 00:48:38.312313 | orchestrator | 2026-02-28 00:48:38 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:48:38.312325 | orchestrator | 2026-02-28 00:48:38 | INFO  | Task 5e3655da-e3e6-43f9-a57a-a1c993f58286 is in state STARTED 2026-02-28 00:48:38.314596 | orchestrator | 2026-02-28 00:48:38 | INFO  | Task 4d8376f3-a2b7-449b-8353-fdea1456ef19 is in state STARTED 2026-02-28 00:48:38.316606 | orchestrator | 2026-02-28 00:48:38 | INFO  | Task 399ced28-2a01-42c9-b976-1444b3ccde84 is in state STARTED 2026-02-28 00:48:38.316646 | orchestrator | 2026-02-28 00:48:38 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:48:41.352840 | orchestrator | 2026-02-28 
00:48:41 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED 2026-02-28 00:48:41.353908 | orchestrator | 2026-02-28 00:48:41 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:48:41.358119 | orchestrator | 2026-02-28 00:48:41 | INFO  | Task 5e3655da-e3e6-43f9-a57a-a1c993f58286 is in state STARTED 2026-02-28 00:48:41.362177 | orchestrator | 2026-02-28 00:48:41 | INFO  | Task 4d8376f3-a2b7-449b-8353-fdea1456ef19 is in state STARTED 2026-02-28 00:48:41.362236 | orchestrator | 2026-02-28 00:48:41 | INFO  | Task 399ced28-2a01-42c9-b976-1444b3ccde84 is in state STARTED 2026-02-28 00:48:41.362247 | orchestrator | 2026-02-28 00:48:41 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:48:44.426728 | orchestrator | 2026-02-28 00:48:44 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED 2026-02-28 00:48:44.428833 | orchestrator | 2026-02-28 00:48:44 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:48:44.429682 | orchestrator | 2026-02-28 00:48:44 | INFO  | Task 5e3655da-e3e6-43f9-a57a-a1c993f58286 is in state STARTED 2026-02-28 00:48:44.436788 | orchestrator | 2026-02-28 00:48:44 | INFO  | Task 4d8376f3-a2b7-449b-8353-fdea1456ef19 is in state STARTED 2026-02-28 00:48:44.444824 | orchestrator | 2026-02-28 00:48:44 | INFO  | Task 399ced28-2a01-42c9-b976-1444b3ccde84 is in state STARTED 2026-02-28 00:48:44.444945 | orchestrator | 2026-02-28 00:48:44 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:48:47.506793 | orchestrator | 2026-02-28 00:48:47 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED 2026-02-28 00:48:47.509564 | orchestrator | 2026-02-28 00:48:47 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:48:47.513564 | orchestrator | 2026-02-28 00:48:47 | INFO  | Task 5e3655da-e3e6-43f9-a57a-a1c993f58286 is in state STARTED 2026-02-28 00:48:47.518225 | orchestrator | 2026-02-28 
00:48:47 | INFO  | Task 4d8376f3-a2b7-449b-8353-fdea1456ef19 is in state STARTED
2026-02-28 00:48:47.519872 | orchestrator | 2026-02-28 00:48:47 | INFO  | Task 399ced28-2a01-42c9-b976-1444b3ccde84 is in state STARTED
2026-02-28 00:48:47.520377 | orchestrator | 2026-02-28 00:48:47 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:50.587345 | orchestrator | 2026-02-28 00:48:50 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:48:50.587427 | orchestrator | 2026-02-28 00:48:50 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:48:50.587440 | orchestrator | 2026-02-28 00:48:50 | INFO  | Task 5e3655da-e3e6-43f9-a57a-a1c993f58286 is in state STARTED
2026-02-28 00:48:50.588371 | orchestrator | 2026-02-28 00:48:50 | INFO  | Task 4d8376f3-a2b7-449b-8353-fdea1456ef19 is in state STARTED
2026-02-28 00:48:50.590114 | orchestrator | 2026-02-28 00:48:50 | INFO  | Task 399ced28-2a01-42c9-b976-1444b3ccde84 is in state STARTED
2026-02-28 00:48:50.590167 | orchestrator | 2026-02-28 00:48:50 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:53.632249 | orchestrator | 2026-02-28 00:48:53 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:48:53.636229 | orchestrator | 2026-02-28 00:48:53 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:48:53.639887 | orchestrator | 2026-02-28 00:48:53 | INFO  | Task 5e3655da-e3e6-43f9-a57a-a1c993f58286 is in state SUCCESS
2026-02-28 00:48:53.640780 | orchestrator |
2026-02-28 00:48:53.640813 | orchestrator |
2026-02-28 00:48:53.640819 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-02-28 00:48:53.640827 | orchestrator |
2026-02-28 00:48:53.640832 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-02-28 00:48:53.640838 | orchestrator | Saturday 28 February 2026 00:47:14 +0000 (0:00:00.499) 0:00:00.499 *****
2026-02-28 00:48:53.640844 | orchestrator | ok: [testbed-manager] => {
2026-02-28 00:48:53.640851 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-02-28 00:48:53.640858 | orchestrator | }
2026-02-28 00:48:53.640864 | orchestrator |
2026-02-28 00:48:53.640869 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-02-28 00:48:53.640874 | orchestrator | Saturday 28 February 2026 00:47:15 +0000 (0:00:00.423) 0:00:00.923 *****
2026-02-28 00:48:53.640879 | orchestrator | ok: [testbed-manager]
2026-02-28 00:48:53.640885 | orchestrator |
2026-02-28 00:48:53.640890 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-02-28 00:48:53.640894 | orchestrator | Saturday 28 February 2026 00:47:17 +0000 (0:00:02.370) 0:00:03.294 *****
2026-02-28 00:48:53.640899 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-02-28 00:48:53.640905 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-02-28 00:48:53.640909 | orchestrator |
2026-02-28 00:48:53.640914 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-02-28 00:48:53.640951 | orchestrator | Saturday 28 February 2026 00:47:19 +0000 (0:00:01.570) 0:00:04.864 *****
2026-02-28 00:48:53.640957 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:53.640962 | orchestrator |
2026-02-28 00:48:53.640967 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-02-28 00:48:53.640972 | orchestrator | Saturday 28 February 2026 00:47:22 +0000 (0:00:03.139) 0:00:08.003 *****
2026-02-28 00:48:53.640977 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:53.640982 | orchestrator |
2026-02-28 00:48:53.640987 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-02-28 00:48:53.640992 | orchestrator | Saturday 28 February 2026 00:47:25 +0000 (0:00:02.684) 0:00:10.688 *****
2026-02-28 00:48:53.640997 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-02-28 00:48:53.641016 | orchestrator | ok: [testbed-manager]
2026-02-28 00:48:53.641021 | orchestrator |
2026-02-28 00:48:53.641046 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-02-28 00:48:53.641051 | orchestrator | Saturday 28 February 2026 00:47:50 +0000 (0:00:25.723) 0:00:36.411 *****
2026-02-28 00:48:53.641056 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:53.641061 | orchestrator |
2026-02-28 00:48:53.641066 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:48:53.641071 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:48:53.641078 | orchestrator |
2026-02-28 00:48:53.641083 | orchestrator |
2026-02-28 00:48:53.641088 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:48:53.641093 | orchestrator | Saturday 28 February 2026 00:47:53 +0000 (0:00:03.104) 0:00:39.515 *****
2026-02-28 00:48:53.641098 | orchestrator | ===============================================================================
2026-02-28 00:48:53.641103 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.72s
2026-02-28 00:48:53.641108 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.14s
2026-02-28 00:48:53.641113 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.10s
2026-02-28 00:48:53.641118 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.69s
2026-02-28 00:48:53.641123 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.37s
2026-02-28 00:48:53.641128 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.57s
2026-02-28 00:48:53.641132 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.42s
2026-02-28 00:48:53.641137 | orchestrator |
2026-02-28 00:48:53.641142 | orchestrator |
2026-02-28 00:48:53.641147 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-02-28 00:48:53.641151 | orchestrator |
2026-02-28 00:48:53.641162 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-02-28 00:48:53.641171 | orchestrator | Saturday 28 February 2026 00:47:16 +0000 (0:00:00.472) 0:00:00.472 *****
2026-02-28 00:48:53.641180 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-02-28 00:48:53.641189 | orchestrator |
2026-02-28 00:48:53.641199 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-02-28 00:48:53.641206 | orchestrator | Saturday 28 February 2026 00:47:18 +0000 (0:00:01.121) 0:00:01.594 *****
2026-02-28 00:48:53.641215 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-02-28 00:48:53.641222 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-02-28 00:48:53.641231 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-02-28 00:48:53.641239 | orchestrator |
2026-02-28 00:48:53.641246 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-02-28 00:48:53.641253 | orchestrator | Saturday 28 February 2026 00:47:20 +0000 (0:00:02.275) 0:00:03.869 *****
2026-02-28 00:48:53.641261 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:53.641269 | orchestrator |
2026-02-28 00:48:53.641277 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-02-28 00:48:53.641285 | orchestrator | Saturday 28 February 2026 00:47:24 +0000 (0:00:03.719) 0:00:07.589 *****
2026-02-28 00:48:53.641305 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-02-28 00:48:53.641387 | orchestrator | ok: [testbed-manager]
2026-02-28 00:48:53.641395 | orchestrator |
2026-02-28 00:48:53.641400 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-02-28 00:48:53.641405 | orchestrator | Saturday 28 February 2026 00:47:57 +0000 (0:00:33.604) 0:00:41.194 *****
2026-02-28 00:48:53.641420 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:53.641428 | orchestrator |
2026-02-28 00:48:53.641441 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-02-28 00:48:53.641451 | orchestrator | Saturday 28 February 2026 00:48:01 +0000 (0:00:03.404) 0:00:44.598 *****
2026-02-28 00:48:53.641460 | orchestrator | ok: [testbed-manager]
2026-02-28 00:48:53.641467 | orchestrator |
2026-02-28 00:48:53.641475 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-02-28 00:48:53.641482 | orchestrator | Saturday 28 February 2026 00:48:01 +0000 (0:00:00.760) 0:00:45.359 *****
2026-02-28 00:48:53.641490 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:53.641498 | orchestrator |
2026-02-28 00:48:53.641505 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-02-28 00:48:53.641513 | orchestrator | Saturday 28 February 2026 00:48:05 +0000 (0:00:03.882) 0:00:49.242 *****
2026-02-28 00:48:53.641520 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:53.641528 | orchestrator |
2026-02-28 00:48:53.641536 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-02-28 00:48:53.641544 | orchestrator | Saturday 28 February 2026 00:48:07 +0000 (0:00:01.580) 0:00:50.822 *****
2026-02-28 00:48:53.641551 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:53.641559 | orchestrator |
2026-02-28 00:48:53.641567 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-02-28 00:48:53.641575 | orchestrator | Saturday 28 February 2026 00:48:08 +0000 (0:00:00.729) 0:00:51.552 *****
2026-02-28 00:48:53.641583 | orchestrator | ok: [testbed-manager]
2026-02-28 00:48:53.641590 | orchestrator |
2026-02-28 00:48:53.641598 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:48:53.641608 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:48:53.641618 | orchestrator |
2026-02-28 00:48:53.641624 | orchestrator |
2026-02-28 00:48:53.641631 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:48:53.641639 | orchestrator | Saturday 28 February 2026 00:48:09 +0000 (0:00:01.095) 0:00:52.648 *****
2026-02-28 00:48:53.641646 | orchestrator | ===============================================================================
2026-02-28 00:48:53.641654 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.60s
2026-02-28 00:48:53.641661 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.88s
2026-02-28 00:48:53.641669 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 3.72s
2026-02-28 00:48:53.641677 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 3.40s
2026-02-28 00:48:53.641684 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.28s
2026-02-28 00:48:53.641691 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.58s
2026-02-28 00:48:53.641698 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.12s
2026-02-28 00:48:53.641706 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.10s
2026-02-28 00:48:53.641713 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.76s
2026-02-28 00:48:53.641721 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.73s
2026-02-28 00:48:53.641730 | orchestrator |
2026-02-28 00:48:53.641740 | orchestrator |
2026-02-28 00:48:53.641748 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 00:48:53.641756 | orchestrator |
2026-02-28 00:48:53.641764 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 00:48:53.641772 | orchestrator | Saturday 28 February 2026 00:47:16 +0000 (0:00:00.630) 0:00:00.630 *****
2026-02-28 00:48:53.641779 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-02-28 00:48:53.641787 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-02-28 00:48:53.641809 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-02-28 00:48:53.641817 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-02-28 00:48:53.641825 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-02-28 00:48:53.641832 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-02-28 00:48:53.641839 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-02-28 00:48:53.641847 | orchestrator |
2026-02-28 00:48:53.641855 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-02-28 00:48:53.641863 | orchestrator |
2026-02-28 00:48:53.641872 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-02-28 00:48:53.641881 | orchestrator | Saturday 28 February 2026 00:47:18 +0000 (0:00:02.043) 0:00:02.673 *****
2026-02-28 00:48:53.641901 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:48:53.641911 | orchestrator |
2026-02-28 00:48:53.641919 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-02-28 00:48:53.641986 | orchestrator | Saturday 28 February 2026 00:47:20 +0000 (0:00:01.864) 0:00:04.537 *****
2026-02-28 00:48:53.641994 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:48:53.642001 | orchestrator | ok: [testbed-manager]
2026-02-28 00:48:53.642008 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:48:53.642075 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:48:53.642089 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:48:53.642111 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:48:53.642121 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:48:53.642130 | orchestrator |
2026-02-28 00:48:53.642140 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-02-28 00:48:53.642149 | orchestrator | Saturday 28 February 2026 00:47:22 +0000 (0:00:01.975) 0:00:06.513 *****
2026-02-28 00:48:53.642158 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:48:53.642167 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:48:53.642176 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:48:53.642185 | orchestrator | ok: [testbed-manager]
2026-02-28 00:48:53.642195 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:48:53.642204 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:48:53.642214 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:48:53.642222 | orchestrator |
2026-02-28 00:48:53.642231 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-02-28 00:48:53.642240 | orchestrator | Saturday 28 February 2026 00:47:26 +0000 (0:00:03.470) 0:00:09.983 *****
2026-02-28 00:48:53.642249 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:53.642259 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:48:53.642268 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:48:53.642277 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:48:53.642287 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:48:53.642296 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:48:53.642305 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:48:53.642313 | orchestrator |
2026-02-28 00:48:53.642321 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-02-28 00:48:53.642331 | orchestrator | Saturday 28 February 2026 00:47:28 +0000 (0:00:02.249) 0:00:12.233 *****
2026-02-28 00:48:53.642340 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:48:53.642349 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:48:53.642358 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:48:53.642366 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:48:53.642374 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:48:53.642382 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:53.642390 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:48:53.642399 | orchestrator |
2026-02-28 00:48:53.642407 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-02-28 00:48:53.642416 | orchestrator | Saturday 28 February 2026 00:47:38 +0000 (0:00:10.560) 0:00:22.794 *****
2026-02-28 00:48:53.642432 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:48:53.642440 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:48:53.642448 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:48:53.642457 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:48:53.642465 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:48:53.642473 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:48:53.642482 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:53.642490 | orchestrator |
2026-02-28 00:48:53.642498 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-02-28 00:48:53.642507 | orchestrator | Saturday 28 February 2026 00:48:20 +0000 (0:00:41.177) 0:01:03.972 *****
2026-02-28 00:48:53.642517 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-node-0, testbed-manager, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:48:53.642527 | orchestrator |
2026-02-28 00:48:53.642536 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-02-28 00:48:53.642544 | orchestrator | Saturday 28 February 2026 00:48:21 +0000 (0:00:01.545) 0:01:05.517 *****
2026-02-28 00:48:53.642553 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-02-28 00:48:53.642561 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-02-28 00:48:53.642569 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-02-28 00:48:53.642577 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-02-28 00:48:53.642585 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-02-28 00:48:53.642593 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-02-28 00:48:53.642601 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-02-28 00:48:53.642609 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-02-28 00:48:53.642617 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-02-28 00:48:53.642625 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-02-28 00:48:53.642633 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-02-28 00:48:53.642642 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-02-28 00:48:53.642649 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-02-28 00:48:53.642657 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-02-28 00:48:53.642665 | orchestrator |
2026-02-28 00:48:53.642674 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-02-28 00:48:53.642683 | orchestrator | Saturday 28 February 2026 00:48:28 +0000 (0:00:06.736) 0:01:12.253 *****
2026-02-28 00:48:53.642691 | orchestrator | ok: [testbed-manager]
2026-02-28 00:48:53.642699 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:48:53.642707 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:48:53.642716 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:48:53.642724 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:48:53.642732 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:48:53.642740 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:48:53.642747 | orchestrator |
2026-02-28 00:48:53.642755 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-02-28 00:48:53.642763 | orchestrator | Saturday 28 February 2026 00:48:29 +0000 (0:00:01.431) 0:01:13.685 *****
2026-02-28 00:48:53.642771 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:53.642779 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:48:53.642787 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:48:53.642794 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:48:53.642803 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:48:53.642810 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:48:53.642818 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:48:53.642826 | orchestrator |
2026-02-28 00:48:53.642835 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-02-28 00:48:53.642856 | orchestrator | Saturday 28 February 2026 00:48:32 +0000 (0:00:02.208) 0:01:15.893 *****
2026-02-28 00:48:53.642864 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:48:53.642873 | orchestrator | ok: [testbed-manager]
2026-02-28 00:48:53.642881 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:48:53.642889 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:48:53.642898 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:48:53.642907 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:48:53.642915 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:48:53.642942 | orchestrator |
2026-02-28 00:48:53.642950 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-02-28 00:48:53.642993 | orchestrator | Saturday 28 February 2026 00:48:33 +0000 (0:00:01.736) 0:01:17.630 *****
2026-02-28 00:48:53.643002 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:48:53.643009 | orchestrator | ok: [testbed-manager]
2026-02-28 00:48:53.643017 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:48:53.643024 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:48:53.643032 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:48:53.643040 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:48:53.643048 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:48:53.643055 | orchestrator |
2026-02-28 00:48:53.643063 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-02-28 00:48:53.643070 | orchestrator | Saturday 28 February 2026 00:48:36 +0000 (0:00:02.475) 0:01:20.105 *****
2026-02-28 00:48:53.643078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-02-28 00:48:53.643088 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:48:53.643096 | orchestrator |
2026-02-28 00:48:53.643104 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-02-28 00:48:53.643112 | orchestrator | Saturday 28 February 2026 00:48:38 +0000 (0:00:01.971) 0:01:22.076 *****
2026-02-28 00:48:53.643120 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:53.643128 | orchestrator |
2026-02-28 00:48:53.643136 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-02-28 00:48:53.643144 | orchestrator | Saturday 28 February 2026 00:48:40 +0000 (0:00:02.281) 0:01:24.357 *****
2026-02-28 00:48:53.643151 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:48:53.643159 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:48:53.643166 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:48:53.643174 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:48:53.643182 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:48:53.643190 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:48:53.643199 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:53.643206 | orchestrator |
2026-02-28 00:48:53.643213 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:48:53.643220 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:48:53.643228 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:48:53.643237 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:48:53.643245 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:48:53.643254 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:48:53.643262 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:48:53.643276 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:48:53.643284 | orchestrator |
2026-02-28 00:48:53.643292 | orchestrator |
2026-02-28 00:48:53.643308 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:48:53.643316 | orchestrator | Saturday 28 February 2026 00:48:51 +0000 (0:00:11.343) 0:01:35.700 *****
2026-02-28 00:48:53.643324 | orchestrator | ===============================================================================
2026-02-28 00:48:53.643332 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 41.18s
2026-02-28 00:48:53.643340 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.34s
2026-02-28 00:48:53.643347 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.56s
2026-02-28 00:48:53.643355 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.74s
2026-02-28 00:48:53.643362 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.47s
2026-02-28 00:48:53.643370 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.48s
2026-02-28 00:48:53.643377 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.28s
2026-02-28 00:48:53.643385 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.25s
2026-02-28 00:48:53.643393 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.21s
2026-02-28 00:48:53.643401 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.04s
2026-02-28 00:48:53.643536 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.98s
2026-02-28 00:48:53.643555 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.97s
2026-02-28 00:48:53.643564 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.86s
2026-02-28 00:48:53.643573 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.74s
2026-02-28 00:48:53.643581 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.55s
2026-02-28 00:48:53.643589 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.43s
2026-02-28 00:48:53.643598 | orchestrator | 2026-02-28 00:48:53 | INFO  | Task 4d8376f3-a2b7-449b-8353-fdea1456ef19 is in state STARTED
2026-02-28 00:48:53.643608 | orchestrator | 2026-02-28 00:48:53 | INFO  | Task 399ced28-2a01-42c9-b976-1444b3ccde84 is in state SUCCESS
2026-02-28 00:48:53.643615 | orchestrator | 2026-02-28 00:48:53 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:56.683805 | orchestrator | 2026-02-28 00:48:56 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:48:56.684695 | orchestrator | 2026-02-28 00:48:56 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:48:56.686778 | orchestrator | 2026-02-28 00:48:56 | INFO  | Task 4d8376f3-a2b7-449b-8353-fdea1456ef19 is in state STARTED
2026-02-28 00:48:56.686837 | orchestrator | 2026-02-28 00:48:56 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:59.731330 |
orchestrator | 2026-02-28 00:48:59 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED 2026-02-28 00:48:59.734169 | orchestrator | 2026-02-28 00:48:59 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:48:59.735524 | orchestrator | 2026-02-28 00:48:59 | INFO  | Task 4d8376f3-a2b7-449b-8353-fdea1456ef19 is in state STARTED 2026-02-28 00:48:59.735575 | orchestrator | 2026-02-28 00:48:59 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:45.524428 | orchestrator | 2026-02-28 00:49:45 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED 2026-02-28 00:49:45.525355 | orchestrator |
2026-02-28 00:49:45 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:49:45.527244 | orchestrator | 2026-02-28 00:49:45 | INFO  | Task 4d8376f3-a2b7-449b-8353-fdea1456ef19 is in state STARTED 2026-02-28 00:49:45.527318 | orchestrator | 2026-02-28 00:49:45 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:48.571650 | orchestrator | 2026-02-28 00:49:48 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED 2026-02-28 00:49:48.573831 | orchestrator | 2026-02-28 00:49:48 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:49:48.576132 | orchestrator | 2026-02-28 00:49:48 | INFO  | Task 4d8376f3-a2b7-449b-8353-fdea1456ef19 is in state STARTED 2026-02-28 00:49:48.576200 | orchestrator | 2026-02-28 00:49:48 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:51.629355 | orchestrator | 2026-02-28 00:49:51 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED 2026-02-28 00:49:51.630745 | orchestrator | 2026-02-28 00:49:51 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:49:51.632922 | orchestrator | 2026-02-28 00:49:51 | INFO  | Task 4d8376f3-a2b7-449b-8353-fdea1456ef19 is in state STARTED 2026-02-28 00:49:51.633752 | orchestrator | 2026-02-28 00:49:51 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:54.674334 | orchestrator | 2026-02-28 00:49:54 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED 2026-02-28 00:49:54.674462 | orchestrator | 2026-02-28 00:49:54 | INFO  | Task b2f0adc8-00a3-44c1-83b2-220a2dc95ee8 is in state STARTED 2026-02-28 00:49:54.674772 | orchestrator | 2026-02-28 00:49:54 | INFO  | Task 9a7311e2-a5c2-4948-806c-9b5dd47efc42 is in state STARTED 2026-02-28 00:49:54.675524 | orchestrator | 2026-02-28 00:49:54 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:49:54.675867 | orchestrator | 2026-02-28 00:49:54 | INFO  | 
Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:49:54.679787 | orchestrator | 2026-02-28 00:49:54 | INFO  | Task 4d8376f3-a2b7-449b-8353-fdea1456ef19 is in state SUCCESS 2026-02-28 00:49:54.682368 | orchestrator | 2026-02-28 00:49:54.682499 | orchestrator | 2026-02-28 00:49:54.682529 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-02-28 00:49:54.682552 | orchestrator | 2026-02-28 00:49:54.682573 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-02-28 00:49:54.682593 | orchestrator | Saturday 28 February 2026 00:47:38 +0000 (0:00:00.286) 0:00:00.286 ***** 2026-02-28 00:49:54.682613 | orchestrator | ok: [testbed-manager] 2026-02-28 00:49:54.682634 | orchestrator | 2026-02-28 00:49:54.682654 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-02-28 00:49:54.682674 | orchestrator | Saturday 28 February 2026 00:47:39 +0000 (0:00:01.133) 0:00:01.419 ***** 2026-02-28 00:49:54.682694 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-02-28 00:49:54.682706 | orchestrator | 2026-02-28 00:49:54.682717 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-02-28 00:49:54.682728 | orchestrator | Saturday 28 February 2026 00:47:41 +0000 (0:00:01.765) 0:00:03.184 ***** 2026-02-28 00:49:54.682739 | orchestrator | changed: [testbed-manager] 2026-02-28 00:49:54.682751 | orchestrator | 2026-02-28 00:49:54.682771 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-02-28 00:49:54.682788 | orchestrator | Saturday 28 February 2026 00:47:43 +0000 (0:00:02.608) 0:00:05.793 ***** 2026-02-28 00:49:54.682806 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2026-02-28 00:49:54.682826 | orchestrator | ok: [testbed-manager] 2026-02-28 00:49:54.682846 | orchestrator | 2026-02-28 00:49:54.682865 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-02-28 00:49:54.682908 | orchestrator | Saturday 28 February 2026 00:48:41 +0000 (0:00:58.021) 0:01:03.815 ***** 2026-02-28 00:49:54.682928 | orchestrator | changed: [testbed-manager] 2026-02-28 00:49:54.682978 | orchestrator | 2026-02-28 00:49:54.682999 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:49:54.683078 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:49:54.683103 | orchestrator | 2026-02-28 00:49:54.683123 | orchestrator | 2026-02-28 00:49:54.683141 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:49:54.683159 | orchestrator | Saturday 28 February 2026 00:48:52 +0000 (0:00:11.289) 0:01:15.104 ***** 2026-02-28 00:49:54.683177 | orchestrator | =============================================================================== 2026-02-28 00:49:54.683196 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 58.02s 2026-02-28 00:49:54.683215 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 11.29s 2026-02-28 00:49:54.683235 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.61s 2026-02-28 00:49:54.683251 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.77s 2026-02-28 00:49:54.683263 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.13s 2026-02-28 00:49:54.683273 | orchestrator | 2026-02-28 00:49:54.683284 | orchestrator | 2026-02-28 00:49:54.683295 | orchestrator | PLAY [Apply role common] 
******************************************************* 2026-02-28 00:49:54.683306 | orchestrator | 2026-02-28 00:49:54.683317 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-28 00:49:54.683327 | orchestrator | Saturday 28 February 2026 00:47:07 +0000 (0:00:00.262) 0:00:00.262 ***** 2026-02-28 00:49:54.683339 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:49:54.683351 | orchestrator | 2026-02-28 00:49:54.683362 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-28 00:49:54.683372 | orchestrator | Saturday 28 February 2026 00:47:09 +0000 (0:00:01.407) 0:00:01.670 ***** 2026-02-28 00:49:54.683383 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-28 00:49:54.683394 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-28 00:49:54.683405 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-28 00:49:54.683415 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-28 00:49:54.683426 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-28 00:49:54.683437 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-28 00:49:54.683448 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-28 00:49:54.683459 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-28 00:49:54.683469 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-28 00:49:54.683480 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 
'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-28 00:49:54.683492 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-28 00:49:54.683503 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-28 00:49:54.683513 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-28 00:49:54.683524 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-28 00:49:54.683535 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-28 00:49:54.683546 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-28 00:49:54.683574 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-28 00:49:54.683597 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-28 00:49:54.683608 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-28 00:49:54.683619 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-28 00:49:54.683630 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-28 00:49:54.683641 | orchestrator | 2026-02-28 00:49:54.683652 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-28 00:49:54.683662 | orchestrator | Saturday 28 February 2026 00:47:13 +0000 (0:00:04.261) 0:00:05.932 ***** 2026-02-28 00:49:54.683673 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:49:54.683686 | orchestrator | 2026-02-28 
00:49:54.683696 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-28 00:49:54.683707 | orchestrator | Saturday 28 February 2026 00:47:14 +0000 (0:00:01.423) 0:00:07.355 ***** 2026-02-28 00:49:54.683733 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:54.683750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:54.683763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:54.683774 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:54.683785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:54.683799 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:54.683864 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:54.683898 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:54.683922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:54.683943 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:54.683964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:54.683985 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:54.684061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 
00:49:54.684079 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:54.684100 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:54.684140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:54.684171 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-28 00:49:54.684192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:54.684211 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:54.684231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:54.684262 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:54.684282 | orchestrator | 2026-02-28 00:49:54.684296 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 
2026-02-28 00:49:54.684307 | orchestrator | Saturday 28 February 2026 00:47:19 +0000 (0:00:04.612) 0:00:11.967 ***** 2026-02-28 00:49:54.684348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:54.684362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:54.684379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.684391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.684403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.684414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.684426 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:49:54.684438 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:54.684469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:54.684489 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:49:54.684540 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.684562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-28 00:49:54.684583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:54.684595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.684607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:54.684618 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:49:54.684629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.684649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.684661 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:49:54.684672 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.684683 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:49:54.684713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:54.684731 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.684744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.684756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.684767 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:49:54.684778 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.684804 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:49:54.684815 | orchestrator | 2026-02-28 00:49:54.684826 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-28 00:49:54.684837 | orchestrator | Saturday 28 February 2026 00:47:22 +0000 (0:00:03.013) 0:00:14.981 ***** 2026-02-28 00:49:54.684849 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:54.684860 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.684898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:54.684910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.684927 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.684940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.684951 | orchestrator | skipping: [testbed-manager] 
2026-02-28 00:49:54.684962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:54.684981 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:49:54.685003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.685060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:54.685081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.685126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.685148 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:54.685169 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:49:54.685197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-02-28 00:49:54.685216 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:49:54.685228 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.685248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:54.685260 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.685272 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:49:54.685284 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.685310 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:54.685322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.685334 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:49:54.685350 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.685364 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.685394 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:49:54.685413 | orchestrator | 2026-02-28 00:49:54.685431 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-02-28 00:49:54.685450 | orchestrator | Saturday 28 February 2026 00:47:28 +0000 (0:00:06.364) 0:00:21.345 ***** 2026-02-28 00:49:54.685470 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:49:54.685490 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:49:54.685509 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:49:54.685528 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:49:54.685547 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:49:54.685566 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:49:54.685585 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:49:54.685605 | orchestrator | 2026-02-28 00:49:54.685617 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-28 00:49:54.685628 | orchestrator | Saturday 28 February 2026 00:47:30 +0000 (0:00:01.644) 0:00:22.990 ***** 2026-02-28 00:49:54.685639 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:49:54.685651 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:49:54.685661 | 
orchestrator | skipping: [testbed-node-1] 2026-02-28 00:49:54.685672 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:49:54.685683 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:49:54.685694 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:49:54.685705 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:49:54.685716 | orchestrator | 2026-02-28 00:49:54.685727 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-28 00:49:54.685738 | orchestrator | Saturday 28 February 2026 00:47:32 +0000 (0:00:02.033) 0:00:25.023 ***** 2026-02-28 00:49:54.685749 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:49:54.685760 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:49:54.685771 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:49:54.685782 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:49:54.685793 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:49:54.685804 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:49:54.685815 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:49:54.685826 | orchestrator | 2026-02-28 00:49:54.685837 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-02-28 00:49:54.685849 | orchestrator | Saturday 28 February 2026 00:47:34 +0000 (0:00:01.715) 0:00:26.738 ***** 2026-02-28 00:49:54.685860 | orchestrator | changed: [testbed-manager] 2026-02-28 00:49:54.685870 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:49:54.685881 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:49:54.685892 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:49:54.685903 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:49:54.685914 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:49:54.685925 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:49:54.685936 | orchestrator | 2026-02-28 00:49:54.685947 | orchestrator | TASK [common : Copying 
over config.json files for services] ******************** 2026-02-28 00:49:54.685958 | orchestrator | Saturday 28 February 2026 00:47:38 +0000 (0:00:04.416) 0:00:31.155 ***** 2026-02-28 00:49:54.685983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:54.685996 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:54.686159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:54.686182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-28 00:49:54.686199 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-28 00:49:54.686219 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-28 00:49:54.686239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.686258 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-28 00:49:54.686304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.686350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.686381 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.686404 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.686416 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.686428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.686441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.686461 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.686482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.686494 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.686511 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.686523 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.686535 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.686546 | orchestrator |
2026-02-28 00:49:54.686557 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-02-28 00:49:54.686570 | orchestrator | Saturday 28 February 2026 00:47:45 +0000 (0:00:07.249) 0:00:38.404 *****
2026-02-28 00:49:54.686581 | orchestrator | [WARNING]: Skipped
2026-02-28 00:49:54.686594 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-02-28 00:49:54.686605 | orchestrator | to this access issue:
2026-02-28 00:49:54.686617 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-02-28 00:49:54.686629 | orchestrator | directory
2026-02-28 00:49:54.686639 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-28 00:49:54.686649 | orchestrator |
2026-02-28 00:49:54.686659 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-02-28 00:49:54.686669 | orchestrator | Saturday 28 February 2026 00:47:47 +0000 (0:00:01.633) 0:00:40.038 *****
2026-02-28 00:49:54.686679 | orchestrator | [WARNING]: Skipped
2026-02-28 00:49:54.686689 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-02-28 00:49:54.686699 | orchestrator | to this access issue:
2026-02-28 00:49:54.686709 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-02-28 00:49:54.686719 | orchestrator | directory
2026-02-28 00:49:54.686730 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-28 00:49:54.686747 | orchestrator |
2026-02-28 00:49:54.686757 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-02-28 00:49:54.686767 | orchestrator | Saturday 28 February 2026 00:47:49 +0000 (0:00:02.029) 0:00:42.067 *****
2026-02-28 00:49:54.686777 | orchestrator | [WARNING]: Skipped
2026-02-28 00:49:54.686787 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-02-28 00:49:54.686797 | orchestrator | to this access issue:
2026-02-28 00:49:54.686807 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-02-28 00:49:54.686817 | orchestrator | directory
2026-02-28 00:49:54.686828 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-28 00:49:54.686838 | orchestrator |
2026-02-28 00:49:54.686848 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-02-28 00:49:54.686858 | orchestrator | Saturday 28 February 2026 00:47:51 +0000 (0:00:02.264) 0:00:44.332 *****
2026-02-28 00:49:54.686868 | orchestrator | [WARNING]: Skipped
2026-02-28 00:49:54.686878 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-02-28 00:49:54.686888 | orchestrator | to this access issue:
2026-02-28 00:49:54.686898 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-02-28 00:49:54.686908 | orchestrator | directory
2026-02-28 00:49:54.686919 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-28 00:49:54.686928 | orchestrator |
2026-02-28 00:49:54.686945 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-02-28 00:49:54.686956 | orchestrator | Saturday 28 February 2026 00:47:53 +0000 (0:00:01.506) 0:00:45.838 *****
2026-02-28 00:49:54.686965 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:49:54.686975 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:49:54.686985 | orchestrator | changed: [testbed-manager]
2026-02-28 00:49:54.686995 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:49:54.687005 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:49:54.687037 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:49:54.687049 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:49:54.687059 | orchestrator |
2026-02-28 00:49:54.687069 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-02-28 00:49:54.687079 | orchestrator | Saturday 28 February 2026 00:48:02 +0000 (0:00:09.518) 0:00:55.357 *****
2026-02-28 00:49:54.687089 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-28 00:49:54.687100 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-28 00:49:54.687110 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-28 00:49:54.687121 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-28 00:49:54.687131 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-28 00:49:54.687146 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-28 00:49:54.687156 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-28 00:49:54.687166 | orchestrator |
2026-02-28 00:49:54.687176 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-02-28 00:49:54.687187 | orchestrator | Saturday 28 February 2026 00:48:07 +0000 (0:00:04.982) 0:01:00.340 *****
2026-02-28 00:49:54.687197 | orchestrator | changed: [testbed-manager]
2026-02-28 00:49:54.687214 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:49:54.687231 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:49:54.687248 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:49:54.687263 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:49:54.687280 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:49:54.687298 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:49:54.687331 | orchestrator |
2026-02-28 00:49:54.687349 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-02-28 00:49:54.687366 | orchestrator | Saturday 28 February 2026 00:48:11 +0000 (0:00:04.079) 0:01:04.420 *****
2026-02-28 00:49:54.687382 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-28 00:49:54.687394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.687406 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-28 00:49:54.687416 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.687440 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.687458 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-28 00:49:54.687485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.687514 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-28 00:49:54.687532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.687548 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.687559 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-28 00:49:54.687578 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-28 00:49:54.687590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.687604 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.687621 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-28 00:49:54.687632 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.687642 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.687653 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.687664 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.687675 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.687692 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.687703 | orchestrator |
2026-02-28 00:49:54.687713 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-02-28 00:49:54.687723 | orchestrator | Saturday 28 February 2026 00:48:14 +0000 (0:00:02.318) 0:01:06.739 *****
2026-02-28 00:49:54.687733 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-28 00:49:54.687743 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-28 00:49:54.687753 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-28 00:49:54.687769 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-28 00:49:54.687779 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-28 00:49:54.687795 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-28 00:49:54.687805 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-28 00:49:54.687815 | orchestrator |
2026-02-28 00:49:54.687825 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-02-28 00:49:54.687835 | orchestrator | Saturday 28 February 2026 00:48:18 +0000 (0:00:04.844) 0:01:11.584 *****
2026-02-28 00:49:54.687845 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-28 00:49:54.687855 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-28 00:49:54.687865 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-28 00:49:54.687875 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-28 00:49:54.687885 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-28 00:49:54.687895 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-28 00:49:54.687905 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-28 00:49:54.687915 | orchestrator |
2026-02-28 00:49:54.687925 | orchestrator | TASK [service-check-containers : common | Check containers] ********************
2026-02-28 00:49:54.687935 | orchestrator | Saturday 28 February 2026 00:48:22 +0000 (0:00:03.201) 0:01:14.785 *****
2026-02-28 00:49:54.687945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-28 00:49:54.687956 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-28 00:49:54.687966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-28 00:49:54.687988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-28 00:49:54.688005 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-28 00:49:54.688048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.688066 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.688076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.688087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.688097 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-28 00:49:54.688114 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.688136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.688147 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.688162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.688173 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-28 00:49:54.688183 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.688193 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:49:54.688204
| orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:54.688215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:54.688246 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:54.688265 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 
00:49:54.688282 | orchestrator | 2026-02-28 00:49:54.688300 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-02-28 00:49:54.688319 | orchestrator | Saturday 28 February 2026 00:48:27 +0000 (0:00:05.382) 0:01:20.168 ***** 2026-02-28 00:49:54.688336 | orchestrator | changed: [testbed-manager] => { 2026-02-28 00:49:54.688353 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:49:54.688369 | orchestrator | } 2026-02-28 00:49:54.688387 | orchestrator | changed: [testbed-node-0] => { 2026-02-28 00:49:54.688404 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:49:54.688419 | orchestrator | } 2026-02-28 00:49:54.688437 | orchestrator | changed: [testbed-node-1] => { 2026-02-28 00:49:54.688453 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:49:54.688470 | orchestrator | } 2026-02-28 00:49:54.688488 | orchestrator | changed: [testbed-node-2] => { 2026-02-28 00:49:54.688504 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:49:54.688519 | orchestrator | } 2026-02-28 00:49:54.688537 | orchestrator | changed: [testbed-node-3] => { 2026-02-28 00:49:54.688560 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:49:54.688578 | orchestrator | } 2026-02-28 00:49:54.688594 | orchestrator | changed: [testbed-node-4] => { 2026-02-28 00:49:54.688609 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:49:54.688625 | orchestrator | } 2026-02-28 00:49:54.688641 | orchestrator | changed: [testbed-node-5] => { 2026-02-28 00:49:54.688655 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:49:54.688669 | orchestrator | } 2026-02-28 00:49:54.688685 | orchestrator | 2026-02-28 00:49:54.688701 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-28 00:49:54.688717 | orchestrator | Saturday 28 February 2026 00:48:28 +0000 (0:00:01.123) 0:01:21.291 ***** 2026-02-28 00:49:54.688736 | orchestrator | skipping: [testbed-manager] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:54.688753 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.688784 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.688817 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:49:54.688834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:54.688858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.688870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.688886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:54.688897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.688907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.688917 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:49:54.688928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:54.688950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.688960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.688971 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:49:54.688987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:54.689004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-28 00:49:54.689080 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.689097 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:54.689108 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:49:54.689118 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:49:54.689128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.689146 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.689156 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:49:54.689166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:54.689184 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:54.689195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-02-28 00:49:54.689204 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:49:54.689214 | orchestrator | 2026-02-28 00:49:54.689225 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-02-28 00:49:54.689235 | orchestrator | Saturday 28 February 2026 00:48:31 +0000 (0:00:02.627) 0:01:23.919 ***** 2026-02-28 00:49:54.689245 | orchestrator | changed: [testbed-manager] 2026-02-28 00:49:54.689254 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:49:54.689264 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:49:54.689274 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:49:54.689283 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:49:54.689293 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:49:54.689308 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:49:54.689319 | orchestrator | 2026-02-28 00:49:54.689329 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-02-28 00:49:54.689339 | orchestrator | Saturday 28 February 2026 00:48:33 +0000 (0:00:02.205) 0:01:26.124 ***** 2026-02-28 00:49:54.689349 | orchestrator | changed: [testbed-manager] 2026-02-28 00:49:54.689359 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:49:54.689368 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:49:54.689378 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:49:54.689388 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:49:54.689398 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:49:54.689408 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:49:54.689417 | orchestrator | 2026-02-28 00:49:54.689427 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-28 00:49:54.689443 | orchestrator | Saturday 28 February 2026 00:48:35 +0000 (0:00:01.624) 0:01:27.749 ***** 2026-02-28 00:49:54.689453 | orchestrator | 2026-02-28 00:49:54.689463 | 
orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-28 00:49:54.689472 | orchestrator | Saturday 28 February 2026 00:48:35 +0000 (0:00:00.072) 0:01:27.822 ***** 2026-02-28 00:49:54.689482 | orchestrator | 2026-02-28 00:49:54.689492 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-28 00:49:54.689502 | orchestrator | Saturday 28 February 2026 00:48:35 +0000 (0:00:00.085) 0:01:27.907 ***** 2026-02-28 00:49:54.689512 | orchestrator | 2026-02-28 00:49:54.689522 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-28 00:49:54.689532 | orchestrator | Saturday 28 February 2026 00:48:35 +0000 (0:00:00.303) 0:01:28.211 ***** 2026-02-28 00:49:54.689542 | orchestrator | 2026-02-28 00:49:54.689551 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-28 00:49:54.689561 | orchestrator | Saturday 28 February 2026 00:48:35 +0000 (0:00:00.069) 0:01:28.280 ***** 2026-02-28 00:49:54.689571 | orchestrator | 2026-02-28 00:49:54.689581 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-28 00:49:54.689591 | orchestrator | Saturday 28 February 2026 00:48:35 +0000 (0:00:00.066) 0:01:28.346 ***** 2026-02-28 00:49:54.689601 | orchestrator | 2026-02-28 00:49:54.689609 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-28 00:49:54.689617 | orchestrator | Saturday 28 February 2026 00:48:35 +0000 (0:00:00.065) 0:01:28.411 ***** 2026-02-28 00:49:54.689625 | orchestrator | 2026-02-28 00:49:54.689633 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-28 00:49:54.689641 | orchestrator | Saturday 28 February 2026 00:48:35 +0000 (0:00:00.097) 0:01:28.509 ***** 2026-02-28 00:49:54.689649 | orchestrator | changed: [testbed-node-0] 
2026-02-28 00:49:54.689657 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:49:54.689665 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:49:54.689673 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:49:54.689681 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:49:54.689689 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:49:54.689696 | orchestrator | changed: [testbed-manager] 2026-02-28 00:49:54.689704 | orchestrator | 2026-02-28 00:49:54.689712 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-02-28 00:49:54.689723 | orchestrator | Saturday 28 February 2026 00:49:09 +0000 (0:00:33.993) 0:02:02.503 ***** 2026-02-28 00:49:54.689737 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:49:54.689751 | orchestrator | changed: [testbed-manager] 2026-02-28 00:49:54.689764 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:49:54.689777 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:49:54.689791 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:49:54.689804 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:49:54.689819 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:49:54.689831 | orchestrator | 2026-02-28 00:49:54.689844 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-02-28 00:49:54.689858 | orchestrator | Saturday 28 February 2026 00:49:40 +0000 (0:00:30.257) 0:02:32.761 ***** 2026-02-28 00:49:54.689873 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:49:54.689888 | orchestrator | ok: [testbed-manager] 2026-02-28 00:49:54.689902 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:49:54.689917 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:49:54.689927 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:49:54.689935 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:49:54.689943 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:49:54.689951 | orchestrator | 2026-02-28 
00:49:54.689959 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-02-28 00:49:54.689967 | orchestrator | Saturday 28 February 2026 00:49:42 +0000 (0:00:02.481) 0:02:35.242 ***** 2026-02-28 00:49:54.689985 | orchestrator | changed: [testbed-manager] 2026-02-28 00:49:54.689994 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:49:54.690009 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:49:54.690068 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:49:54.690077 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:49:54.690085 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:49:54.690095 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:49:54.690103 | orchestrator | 2026-02-28 00:49:54.690111 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:49:54.690120 | orchestrator | testbed-manager : ok=24  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 00:49:54.690130 | orchestrator | testbed-node-0 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 00:49:54.690138 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 00:49:54.690146 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 00:49:54.690161 | orchestrator | testbed-node-3 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 00:49:54.690170 | orchestrator | testbed-node-4 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 00:49:54.690178 | orchestrator | testbed-node-5 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 00:49:54.690186 | orchestrator | 2026-02-28 00:49:54.690195 | orchestrator | 2026-02-28 00:49:54.690203 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-28 00:49:54.690211 | orchestrator | Saturday 28 February 2026 00:49:52 +0000 (0:00:09.570) 0:02:44.813 ***** 2026-02-28 00:49:54.690219 | orchestrator | =============================================================================== 2026-02-28 00:49:54.690227 | orchestrator | common : Restart fluentd container ------------------------------------- 33.99s 2026-02-28 00:49:54.690236 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 30.26s 2026-02-28 00:49:54.690244 | orchestrator | common : Restart cron container ----------------------------------------- 9.57s 2026-02-28 00:49:54.690252 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 9.52s 2026-02-28 00:49:54.690260 | orchestrator | common : Copying over config.json files for services -------------------- 7.25s 2026-02-28 00:49:54.690269 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 6.36s 2026-02-28 00:49:54.690277 | orchestrator | service-check-containers : common | Check containers -------------------- 5.38s 2026-02-28 00:49:54.690285 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.98s 2026-02-28 00:49:54.690293 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 4.84s 2026-02-28 00:49:54.690301 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.61s 2026-02-28 00:49:54.690310 | orchestrator | common : Copying over kolla.target -------------------------------------- 4.42s 2026-02-28 00:49:54.690318 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.26s 2026-02-28 00:49:54.690326 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.08s 2026-02-28 00:49:54.690334 | orchestrator | common : Copy rabbitmq erl_inetrc 
to kolla toolbox ---------------------- 3.20s 2026-02-28 00:49:54.690342 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.01s 2026-02-28 00:49:54.690350 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.63s 2026-02-28 00:49:54.690358 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.48s 2026-02-28 00:49:54.690373 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.32s 2026-02-28 00:49:54.690381 | orchestrator | common : Find custom fluentd format config files ------------------------ 2.26s 2026-02-28 00:49:54.690389 | orchestrator | common : Creating log volume -------------------------------------------- 2.21s 2026-02-28 00:49:54.690397 | orchestrator | 2026-02-28 00:49:54 | INFO  | Task 13862bea-e16b-451e-ac40-7439e7f240d1 is in state STARTED 2026-02-28 00:49:54.690406 | orchestrator | 2026-02-28 00:49:54 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:57.721504 | orchestrator | 2026-02-28 00:49:57 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED 2026-02-28 00:49:57.721632 | orchestrator | 2026-02-28 00:49:57 | INFO  | Task b2f0adc8-00a3-44c1-83b2-220a2dc95ee8 is in state STARTED 2026-02-28 00:49:57.726432 | orchestrator | 2026-02-28 00:49:57 | INFO  | Task 9a7311e2-a5c2-4948-806c-9b5dd47efc42 is in state STARTED 2026-02-28 00:49:57.726697 | orchestrator | 2026-02-28 00:49:57 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:49:57.727490 | orchestrator | 2026-02-28 00:49:57 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:49:57.731363 | orchestrator | 2026-02-28 00:49:57 | INFO  | Task 13862bea-e16b-451e-ac40-7439e7f240d1 is in state STARTED 2026-02-28 00:49:57.731458 | orchestrator | 2026-02-28 00:49:57 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:50:00.771520 
| orchestrator | 2026-02-28 00:50:00 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:50:00.771740 | orchestrator | 2026-02-28 00:50:00 | INFO  | Task b2f0adc8-00a3-44c1-83b2-220a2dc95ee8 is in state STARTED
2026-02-28 00:50:00.772706 | orchestrator | 2026-02-28 00:50:00 | INFO  | Task 9a7311e2-a5c2-4948-806c-9b5dd47efc42 is in state STARTED
2026-02-28 00:50:00.773329 | orchestrator | 2026-02-28 00:50:00 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:50:00.776571 | orchestrator | 2026-02-28 00:50:00 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:50:00.776911 | orchestrator | 2026-02-28 00:50:00 | INFO  | Task 13862bea-e16b-451e-ac40-7439e7f240d1 is in state STARTED
2026-02-28 00:50:00.776941 | orchestrator | 2026-02-28 00:50:00 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:50:03.821248 | orchestrator | 2026-02-28 00:50:03 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:50:03.822999 | orchestrator | 2026-02-28 00:50:03 | INFO  | Task b2f0adc8-00a3-44c1-83b2-220a2dc95ee8 is in state STARTED
2026-02-28 00:50:03.825214 | orchestrator | 2026-02-28 00:50:03 | INFO  | Task 9a7311e2-a5c2-4948-806c-9b5dd47efc42 is in state STARTED
2026-02-28 00:50:03.826651 | orchestrator | 2026-02-28 00:50:03 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:50:03.829091 | orchestrator | 2026-02-28 00:50:03 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:50:03.832605 | orchestrator | 2026-02-28 00:50:03 | INFO  | Task 13862bea-e16b-451e-ac40-7439e7f240d1 is in state STARTED
2026-02-28 00:50:03.832642 | orchestrator | 2026-02-28 00:50:03 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:50:06.866515 | orchestrator | 2026-02-28 00:50:06 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:50:06.868309 | orchestrator | 2026-02-28 00:50:06 | INFO  | Task b2f0adc8-00a3-44c1-83b2-220a2dc95ee8 is in state STARTED
2026-02-28 00:50:06.871834 | orchestrator | 2026-02-28 00:50:06 | INFO  | Task 9a7311e2-a5c2-4948-806c-9b5dd47efc42 is in state STARTED
2026-02-28 00:50:06.872465 | orchestrator | 2026-02-28 00:50:06 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:50:06.873483 | orchestrator | 2026-02-28 00:50:06 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:50:06.875535 | orchestrator | 2026-02-28 00:50:06 | INFO  | Task 13862bea-e16b-451e-ac40-7439e7f240d1 is in state STARTED
2026-02-28 00:50:06.875570 | orchestrator | 2026-02-28 00:50:06 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:50:09.905344 | orchestrator | 2026-02-28 00:50:09 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:50:09.905773 | orchestrator | 2026-02-28 00:50:09 | INFO  | Task b2f0adc8-00a3-44c1-83b2-220a2dc95ee8 is in state STARTED
2026-02-28 00:50:09.908031 | orchestrator | 2026-02-28 00:50:09 | INFO  | Task 9a7311e2-a5c2-4948-806c-9b5dd47efc42 is in state STARTED
2026-02-28 00:50:09.908938 | orchestrator | 2026-02-28 00:50:09 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:50:09.909871 | orchestrator | 2026-02-28 00:50:09 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:50:09.911339 | orchestrator | 2026-02-28 00:50:09 | INFO  | Task 13862bea-e16b-451e-ac40-7439e7f240d1 is in state STARTED
2026-02-28 00:50:09.911434 | orchestrator | 2026-02-28 00:50:09 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:50:12.966271 | orchestrator | 2026-02-28 00:50:12 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:50:12.966884 | orchestrator | 2026-02-28 00:50:12 | INFO  | Task b2f0adc8-00a3-44c1-83b2-220a2dc95ee8 is in state STARTED
2026-02-28 00:50:12.967807 | orchestrator | 2026-02-28 00:50:12 | INFO  | Task 9a7311e2-a5c2-4948-806c-9b5dd47efc42 is in state STARTED
2026-02-28 00:50:12.968569 | orchestrator | 2026-02-28 00:50:12 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:50:12.969471 | orchestrator | 2026-02-28 00:50:12 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:50:12.970816 | orchestrator | 2026-02-28 00:50:12 | INFO  | Task 13862bea-e16b-451e-ac40-7439e7f240d1 is in state STARTED
2026-02-28 00:50:12.970880 | orchestrator | 2026-02-28 00:50:12 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:50:16.073892 | orchestrator | 2026-02-28 00:50:16 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:50:16.074934 | orchestrator | 2026-02-28 00:50:16 | INFO  | Task b2f0adc8-00a3-44c1-83b2-220a2dc95ee8 is in state STARTED
2026-02-28 00:50:16.074983 | orchestrator | 2026-02-28 00:50:16 | INFO  | Task 9a7311e2-a5c2-4948-806c-9b5dd47efc42 is in state STARTED
2026-02-28 00:50:16.076050 | orchestrator | 2026-02-28 00:50:16 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:50:16.076349 | orchestrator | 2026-02-28 00:50:16 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:50:16.077192 | orchestrator | 2026-02-28 00:50:16 | INFO  | Task 13862bea-e16b-451e-ac40-7439e7f240d1 is in state STARTED
2026-02-28 00:50:16.077231 | orchestrator | 2026-02-28 00:50:16 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:50:19.206419 | orchestrator |
2026-02-28 00:50:19.206510 | orchestrator |
2026-02-28 00:50:19.206539 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 00:50:19.206553 | orchestrator |
2026-02-28 00:50:19.206565 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 00:50:19.206576 | orchestrator | Saturday 28
February 2026 00:50:00 +0000 (0:00:00.535) 0:00:00.536 *****
2026-02-28 00:50:19.206587 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:50:19.206617 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:50:19.206629 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:50:19.206640 | orchestrator |
2026-02-28 00:50:19.206651 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 00:50:19.206662 | orchestrator | Saturday 28 February 2026 00:50:00 +0000 (0:00:00.484) 0:00:01.021 *****
2026-02-28 00:50:19.206673 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-02-28 00:50:19.206686 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-02-28 00:50:19.206697 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-02-28 00:50:19.206708 | orchestrator |
2026-02-28 00:50:19.206719 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-02-28 00:50:19.206730 | orchestrator |
2026-02-28 00:50:19.206741 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-02-28 00:50:19.206752 | orchestrator | Saturday 28 February 2026 00:50:01 +0000 (0:00:01.088) 0:00:01.717 *****
2026-02-28 00:50:19.206763 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:50:19.206775 | orchestrator |
2026-02-28 00:50:19.206786 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-02-28 00:50:19.206797 | orchestrator | Saturday 28 February 2026 00:50:02 +0000 (0:00:01.088) 0:00:02.805 *****
2026-02-28 00:50:19.206808 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-02-28 00:50:19.206819 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-02-28 00:50:19.206830 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-02-28 00:50:19.206841 | orchestrator |
2026-02-28 00:50:19.206852 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-02-28 00:50:19.206862 | orchestrator | Saturday 28 February 2026 00:50:03 +0000 (0:00:01.272) 0:00:04.078 *****
2026-02-28 00:50:19.206873 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-02-28 00:50:19.206884 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-02-28 00:50:19.206895 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-02-28 00:50:19.206906 | orchestrator |
2026-02-28 00:50:19.206916 | orchestrator | TASK [service-check-containers : memcached | Check containers] *****************
2026-02-28 00:50:19.206927 | orchestrator | Saturday 28 February 2026 00:50:06 +0000 (0:00:03.000) 0:00:07.079 *****
2026-02-28 00:50:19.206943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-28 00:50:19.206958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-28 00:50:19.207006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-28 00:50:19.207029 | orchestrator |
2026-02-28 00:50:19.207049 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] ***
2026-02-28 00:50:19.207062 | orchestrator | Saturday 28 February 2026 00:50:07 +0000 (0:00:01.112) 0:00:08.191 *****
2026-02-28 00:50:19.207075 | orchestrator | changed: [testbed-node-1] => {
2026-02-28 00:50:19.207113 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 00:50:19.207127 | orchestrator | }
2026-02-28 00:50:19.207140 | orchestrator | changed: [testbed-node-0] => {
2026-02-28 00:50:19.207153 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 00:50:19.207165 | orchestrator | }
2026-02-28 00:50:19.207178 | orchestrator | changed: [testbed-node-2] => {
2026-02-28 00:50:19.207191 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 00:50:19.207203 | orchestrator | }
2026-02-28 00:50:19.207216 | orchestrator |
2026-02-28 00:50:19.207229 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-28 00:50:19.207241 | orchestrator | Saturday 28 February 2026 00:50:08 +0000 (0:00:00.523) 0:00:08.714 *****
2026-02-28 00:50:19.207252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-28 00:50:19.207264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-28 00:50:19.207276 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:50:19.207287 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:50:19.207298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-28 00:50:19.207316 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:50:19.207327 | orchestrator |
2026-02-28 00:50:19.207338 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-02-28 00:50:19.207349 | orchestrator | Saturday 28 February 2026 00:50:10 +0000 (0:00:02.358) 0:00:11.073 *****
2026-02-28 00:50:19.207360 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:50:19.207371 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:50:19.207382 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:50:19.207392 | orchestrator |
2026-02-28 00:50:19.207403 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:50:19.207415 | orchestrator | testbed-node-0 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-28 00:50:19.207427 | orchestrator | testbed-node-1 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-28 00:50:19.207438 | orchestrator | testbed-node-2 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-28 00:50:19.207449 | orchestrator |
2026-02-28 00:50:19.207460 | orchestrator |
2026-02-28 00:50:19.207471 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:50:19.207481 | orchestrator | Saturday 28 February 2026 00:50:14 +0000 (0:00:03.339) 0:00:14.412 *****
2026-02-28 00:50:19.207499 | orchestrator | ===============================================================================
2026-02-28 00:50:19.207516 | orchestrator | memcached : Restart memcached container --------------------------------- 3.34s
2026-02-28 00:50:19.207527 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.00s
2026-02-28 00:50:19.207538 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.36s
2026-02-28 00:50:19.207553 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.27s
2026-02-28 00:50:19.207579 | orchestrator | service-check-containers : memcached | Check containers ----------------- 1.11s
2026-02-28 00:50:19.207601 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.09s
2026-02-28 00:50:19.207619 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s
2026-02-28 00:50:19.207637 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.52s
2026-02-28 00:50:19.207656 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.48s
2026-02-28 00:50:19.207675 | orchestrator | 2026-02-28 00:50:19 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:50:19.207694 | orchestrator | 2026-02-28 00:50:19 | INFO  | Task b2f0adc8-00a3-44c1-83b2-220a2dc95ee8 is in state SUCCESS
2026-02-28 00:50:19.207713 | orchestrator | 2026-02-28 00:50:19 | INFO  | Task 9a7311e2-a5c2-4948-806c-9b5dd47efc42 is in state STARTED 2026-02-28
00:50:19.207732 | orchestrator | 2026-02-28 00:50:19 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:50:19.207751 | orchestrator | 2026-02-28 00:50:19 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:50:19.207763 | orchestrator | 2026-02-28 00:50:19 | INFO  | Task 62b32f35-7e9c-46e5-ad32-af6d8892e7f1 is in state STARTED
2026-02-28 00:50:19.207773 | orchestrator | 2026-02-28 00:50:19 | INFO  | Task 13862bea-e16b-451e-ac40-7439e7f240d1 is in state STARTED
2026-02-28 00:50:19.207785 | orchestrator | 2026-02-28 00:50:19 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:50:22.263687 | orchestrator | 2026-02-28 00:50:22 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:50:22.264485 | orchestrator | 2026-02-28 00:50:22 | INFO  | Task 9a7311e2-a5c2-4948-806c-9b5dd47efc42 is in state STARTED
2026-02-28 00:50:22.265057 | orchestrator | 2026-02-28 00:50:22 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:50:22.265803 | orchestrator | 2026-02-28 00:50:22 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:50:22.266339 | orchestrator | 2026-02-28 00:50:22 | INFO  | Task 62b32f35-7e9c-46e5-ad32-af6d8892e7f1 is in state STARTED
2026-02-28 00:50:22.267239 | orchestrator | 2026-02-28 00:50:22 | INFO  | Task 13862bea-e16b-451e-ac40-7439e7f240d1 is in state STARTED
2026-02-28 00:50:22.267327 | orchestrator | 2026-02-28 00:50:22 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:50:25.369358 | orchestrator | 2026-02-28 00:50:25 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:50:25.369488 | orchestrator | 2026-02-28 00:50:25 | INFO  | Task 9a7311e2-a5c2-4948-806c-9b5dd47efc42 is in state STARTED
2026-02-28 00:50:25.370426 | orchestrator | 2026-02-28 00:50:25 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:50:25.370513 | orchestrator | 2026-02-28 00:50:25 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:50:25.371078 | orchestrator | 2026-02-28 00:50:25 | INFO  | Task 62b32f35-7e9c-46e5-ad32-af6d8892e7f1 is in state STARTED
2026-02-28 00:50:25.372015 | orchestrator | 2026-02-28 00:50:25 | INFO  | Task 13862bea-e16b-451e-ac40-7439e7f240d1 is in state STARTED
2026-02-28 00:50:25.372897 | orchestrator | 2026-02-28 00:50:25 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:50:28.642254 | orchestrator | 2026-02-28 00:50:28 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:50:28.643628 | orchestrator | 2026-02-28 00:50:28 | INFO  | Task 9a7311e2-a5c2-4948-806c-9b5dd47efc42 is in state STARTED
2026-02-28 00:50:28.647904 | orchestrator | 2026-02-28 00:50:28 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:50:28.649851 | orchestrator | 2026-02-28 00:50:28 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:50:28.653823 | orchestrator | 2026-02-28 00:50:28 | INFO  | Task 62b32f35-7e9c-46e5-ad32-af6d8892e7f1 is in state STARTED
2026-02-28 00:50:28.654484 | orchestrator | 2026-02-28 00:50:28 | INFO  | Task 13862bea-e16b-451e-ac40-7439e7f240d1 is in state STARTED
2026-02-28 00:50:28.654528 | orchestrator | 2026-02-28 00:50:28 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:50:31.692239 | orchestrator | 2026-02-28 00:50:31 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:50:31.693772 | orchestrator | 2026-02-28 00:50:31 | INFO  | Task 9a7311e2-a5c2-4948-806c-9b5dd47efc42 is in state STARTED
2026-02-28 00:50:31.695762 | orchestrator | 2026-02-28 00:50:31 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:50:31.696798 | orchestrator | 2026-02-28 00:50:31 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:50:31.698255 | orchestrator | 2026-02-28 00:50:31 | INFO  | Task 62b32f35-7e9c-46e5-ad32-af6d8892e7f1 is in state STARTED
2026-02-28 00:50:31.699246 | orchestrator | 2026-02-28 00:50:31 | INFO  | Task 13862bea-e16b-451e-ac40-7439e7f240d1 is in state STARTED
2026-02-28 00:50:31.699288 | orchestrator | 2026-02-28 00:50:31 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:50:34.742411 | orchestrator | 2026-02-28 00:50:34 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:50:34.742697 | orchestrator | 2026-02-28 00:50:34 | INFO  | Task 9a7311e2-a5c2-4948-806c-9b5dd47efc42 is in state STARTED
2026-02-28 00:50:34.743741 | orchestrator | 2026-02-28 00:50:34 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:50:34.744453 | orchestrator | 2026-02-28 00:50:34 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:50:34.745190 | orchestrator | 2026-02-28 00:50:34 | INFO  | Task 62b32f35-7e9c-46e5-ad32-af6d8892e7f1 is in state STARTED
2026-02-28 00:50:34.746070 | orchestrator | 2026-02-28 00:50:34 | INFO  | Task 13862bea-e16b-451e-ac40-7439e7f240d1 is in state STARTED
2026-02-28 00:50:34.746136 | orchestrator | 2026-02-28 00:50:34 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:50:37.835291 | orchestrator | 2026-02-28 00:50:37 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:50:37.838291 | orchestrator | 2026-02-28 00:50:37 | INFO  | Task 9a7311e2-a5c2-4948-806c-9b5dd47efc42 is in state STARTED
2026-02-28 00:50:37.838360 | orchestrator | 2026-02-28 00:50:37 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:50:37.838383 | orchestrator | 2026-02-28 00:50:37 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:50:37.845047 | orchestrator | 2026-02-28 00:50:37 | INFO  | Task 62b32f35-7e9c-46e5-ad32-af6d8892e7f1 is in state STARTED
2026-02-28 00:50:37.852336 | orchestrator | 2026-02-28 00:50:37 | INFO  | Task 13862bea-e16b-451e-ac40-7439e7f240d1 is in state STARTED
2026-02-28 00:50:37.856146 | orchestrator | 2026-02-28 00:50:37 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:50:40.903970 | orchestrator | 2026-02-28 00:50:40 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED
2026-02-28 00:50:40.904425 | orchestrator | 2026-02-28 00:50:40 | INFO  | Task 9a7311e2-a5c2-4948-806c-9b5dd47efc42 is in state STARTED
2026-02-28 00:50:40.905363 | orchestrator | 2026-02-28 00:50:40 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:50:40.913502 | orchestrator | 2026-02-28 00:50:40 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:50:40.916747 | orchestrator | 2026-02-28 00:50:40 | INFO  | Task 62b32f35-7e9c-46e5-ad32-af6d8892e7f1 is in state STARTED
2026-02-28 00:50:40.919005 | orchestrator | 2026-02-28 00:50:40 | INFO  | Task 13862bea-e16b-451e-ac40-7439e7f240d1 is in state SUCCESS
2026-02-28 00:50:40.919649 | orchestrator |
2026-02-28 00:50:40.919685 | orchestrator |
2026-02-28 00:50:40.919695 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 00:50:40.919705 | orchestrator |
2026-02-28 00:50:40.919714 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 00:50:40.919723 | orchestrator | Saturday 28 February 2026 00:50:00 +0000 (0:00:00.472) 0:00:00.472 *****
2026-02-28 00:50:40.919732 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:50:40.919743 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:50:40.919751 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:50:40.919760 | orchestrator |
2026-02-28 00:50:40.919772 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 00:50:40.919780 | orchestrator | Saturday 28 February 2026 00:50:00
+0000 (0:00:00.644) 0:00:01.117 *****
2026-02-28 00:50:40.919789 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-02-28 00:50:40.919799 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-02-28 00:50:40.919809 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-02-28 00:50:40.919818 | orchestrator |
2026-02-28 00:50:40.919828 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-02-28 00:50:40.919837 | orchestrator |
2026-02-28 00:50:40.919846 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-02-28 00:50:40.919881 | orchestrator | Saturday 28 February 2026 00:50:01 +0000 (0:00:00.825) 0:00:01.943 *****
2026-02-28 00:50:40.919891 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:50:40.919901 | orchestrator |
2026-02-28 00:50:40.919910 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-02-28 00:50:40.919919 | orchestrator | Saturday 28 February 2026 00:50:02 +0000 (0:00:00.966) 0:00:02.910 *****
2026-02-28 00:50:40.919931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-28 00:50:40.919961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-28 00:50:40.919971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-28 00:50:40.919982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-28 00:50:40.920004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-28 00:50:40.920014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-28 00:50:40.920033 | orchestrator |
2026-02-28 00:50:40.920042 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-02-28 00:50:40.920056 | orchestrator | Saturday 28 February 2026 00:50:04 +0000 (0:00:01.653) 0:00:04.564 *****
2026-02-28 00:50:40.920066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-28 00:50:40.920076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-28 00:50:40.920086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-28 00:50:40.920095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-28 00:50:40.920164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-28 00:50:40.920187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-28 00:50:40.920207 | orchestrator |
2026-02-28 00:50:40.920218 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-02-28 00:50:40.920227 | orchestrator | Saturday 28 February 2026 00:50:07 +0000 (0:00:03.328) 0:00:07.892 *****
2026-02-28 00:50:40.920245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-28 00:50:40.920255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-28 00:50:40.920264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-28 00:50:40.920272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-28 00:50:40.920281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-28 00:50:40.920298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-28 00:50:40.920314 | orchestrator |
2026-02-28 00:50:40.920323 | orchestrator | TASK [service-check-containers : redis | Check containers] *********************
2026-02-28 00:50:40.920332 | orchestrator | Saturday 28 February 2026 00:50:11 +0000 (0:00:04.336) 0:00:12.229 *****
2026-02-28 00:50:40.920345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-28 00:50:40.920355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-28 00:50:40.920365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-28 00:50:40.920374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF':
'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-28 00:50:40.920383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-28 00:50:40.920399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-28 00:50:40.920417 | orchestrator | 2026-02-28 00:50:40.920427 | orchestrator | TASK [service-check-containers : 
redis | Notify handlers to restart containers] *** 2026-02-28 00:50:40.920438 | orchestrator | Saturday 28 February 2026 00:50:14 +0000 (0:00:02.616) 0:00:14.845 ***** 2026-02-28 00:50:40.920448 | orchestrator | changed: [testbed-node-0] => { 2026-02-28 00:50:40.920459 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:50:40.920470 | orchestrator | } 2026-02-28 00:50:40.920480 | orchestrator | changed: [testbed-node-1] => { 2026-02-28 00:50:40.920490 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:50:40.920501 | orchestrator | } 2026-02-28 00:50:40.920511 | orchestrator | changed: [testbed-node-2] => { 2026-02-28 00:50:40.920521 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:50:40.920530 | orchestrator | } 2026-02-28 00:50:40.920540 | orchestrator | 2026-02-28 00:50:40.920549 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-28 00:50:40.920559 | orchestrator | Saturday 28 February 2026 00:50:15 +0000 (0:00:01.414) 0:00:16.260 ***** 2026-02-28 00:50:40.920585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-28 00:50:40.920597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 
'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-28 00:50:40.920608 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:50:40.920618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-28 00:50:40.920628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-28 00:50:40.920645 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:50:40.920655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-28 00:50:40.920670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-28 00:50:40.920680 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:50:40.920689 | orchestrator | 2026-02-28 00:50:40.920699 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-28 00:50:40.920708 | orchestrator | Saturday 28 February 2026 00:50:18 +0000 (0:00:02.018) 0:00:18.278 ***** 2026-02-28 00:50:40.920717 | orchestrator | 2026-02-28 00:50:40.920726 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-28 00:50:40.920736 | orchestrator | Saturday 28 February 2026 00:50:18 +0000 (0:00:00.293) 0:00:18.572 ***** 2026-02-28 00:50:40.920745 | orchestrator | 2026-02-28 00:50:40.920757 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-28 00:50:40.920767 | orchestrator | Saturday 
28 February 2026 00:50:18 +0000 (0:00:00.245) 0:00:18.817 ***** 2026-02-28 00:50:40.920777 | orchestrator | 2026-02-28 00:50:40.920787 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-28 00:50:40.920796 | orchestrator | Saturday 28 February 2026 00:50:18 +0000 (0:00:00.198) 0:00:19.016 ***** 2026-02-28 00:50:40.920805 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:50:40.920815 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:50:40.920824 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:50:40.920833 | orchestrator | 2026-02-28 00:50:40.920842 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-28 00:50:40.920852 | orchestrator | Saturday 28 February 2026 00:50:28 +0000 (0:00:09.409) 0:00:28.426 ***** 2026-02-28 00:50:40.920861 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:50:40.920870 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:50:40.920879 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:50:40.920888 | orchestrator | 2026-02-28 00:50:40.920898 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:50:40.920908 | orchestrator | testbed-node-0 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:50:40.920919 | orchestrator | testbed-node-1 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:50:40.920928 | orchestrator | testbed-node-2 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:50:40.920937 | orchestrator | 2026-02-28 00:50:40.920947 | orchestrator | 2026-02-28 00:50:40.920956 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:50:40.920971 | orchestrator | Saturday 28 February 2026 00:50:39 +0000 (0:00:11.505) 0:00:39.932 ***** 2026-02-28 00:50:40.920980 | orchestrator | 
=============================================================================== 2026-02-28 00:50:40.920989 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 11.51s 2026-02-28 00:50:40.920999 | orchestrator | redis : Restart redis container ----------------------------------------- 9.41s 2026-02-28 00:50:40.921008 | orchestrator | redis : Copying over redis config files --------------------------------- 4.34s 2026-02-28 00:50:40.921017 | orchestrator | redis : Copying over default config.json files -------------------------- 3.33s 2026-02-28 00:50:40.921027 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.62s 2026-02-28 00:50:40.921036 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.02s 2026-02-28 00:50:40.921046 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.65s 2026-02-28 00:50:40.921055 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 1.41s 2026-02-28 00:50:40.921064 | orchestrator | redis : include_tasks --------------------------------------------------- 0.96s 2026-02-28 00:50:40.921074 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.83s 2026-02-28 00:50:40.921083 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.74s 2026-02-28 00:50:40.921092 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.64s 2026-02-28 00:50:40.921130 | orchestrator | 2026-02-28 00:50:40 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:50:44.017618 | orchestrator | 2026-02-28 00:50:44 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED 2026-02-28 00:50:44.021627 | orchestrator | 2026-02-28 00:50:44 | INFO  | Task 9a7311e2-a5c2-4948-806c-9b5dd47efc42 is in state STARTED 2026-02-28 00:50:44.025173 | orchestrator | 
2026-02-28 00:50:44 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:50:44.025985 | orchestrator | 2026-02-28 00:50:44 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:50:44.027403 | orchestrator | 2026-02-28 00:50:44 | INFO  | Task 62b32f35-7e9c-46e5-ad32-af6d8892e7f1 is in state STARTED 2026-02-28 00:50:44.027450 | orchestrator | 2026-02-28 00:50:44 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:50:47.221758 | orchestrator | 2026-02-28 00:50:47 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED 2026-02-28 00:50:47.222617 | orchestrator | 2026-02-28 00:50:47 | INFO  | Task 9a7311e2-a5c2-4948-806c-9b5dd47efc42 is in state STARTED 2026-02-28 00:50:47.223695 | orchestrator | 2026-02-28 00:50:47 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:50:47.224733 | orchestrator | 2026-02-28 00:50:47 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:50:47.226365 | orchestrator | 2026-02-28 00:50:47 | INFO  | Task 62b32f35-7e9c-46e5-ad32-af6d8892e7f1 is in state STARTED 2026-02-28 00:50:47.226444 | orchestrator | 2026-02-28 00:50:47 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:50:50.311078 | orchestrator | 2026-02-28 00:50:50 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED 2026-02-28 00:50:50.311166 | orchestrator | 2026-02-28 00:50:50 | INFO  | Task 9a7311e2-a5c2-4948-806c-9b5dd47efc42 is in state STARTED 2026-02-28 00:50:50.311503 | orchestrator | 2026-02-28 00:50:50 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:50:50.313229 | orchestrator | 2026-02-28 00:50:50 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:50:50.313449 | orchestrator | 2026-02-28 00:50:50 | INFO  | Task 62b32f35-7e9c-46e5-ad32-af6d8892e7f1 is in state STARTED 2026-02-28 00:50:50.313546 | orchestrator | 
2026-02-28 00:50:50 | INFO  | Wait 1 
second(s) until the next check 2026-02-28 00:51:24.042987 | orchestrator | 2026-02-28 00:51:24 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state STARTED 2026-02-28 00:51:24.043725 | orchestrator | 2026-02-28 00:51:24 | INFO  | Task ba312156-fe42-4c28-ad1a-fa777ea78ebd is in state STARTED 2026-02-28 00:51:24.045773 | orchestrator | 2026-02-28 00:51:24 | INFO  | Task 9a7311e2-a5c2-4948-806c-9b5dd47efc42 is in state SUCCESS 2026-02-28 00:51:24.049076 | orchestrator | 2026-02-28 00:51:24.049114 | orchestrator | 2026-02-28 00:51:24.049120 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 00:51:24.049126 | orchestrator | 2026-02-28 00:51:24.049131 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 00:51:24.049136 | orchestrator | Saturday 28 February 2026 00:49:59 +0000 (0:00:00.442) 0:00:00.442 ***** 2026-02-28 00:51:24.049140 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:51:24.049146 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:51:24.049151 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:51:24.049160 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:51:24.049164 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:51:24.049169 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:51:24.049173 | orchestrator | 2026-02-28 00:51:24.049177 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 00:51:24.049216 | orchestrator | Saturday 28 February 2026 00:50:01 +0000 (0:00:01.538) 0:00:01.981 ***** 2026-02-28 00:51:24.049221 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-28 00:51:24.049226 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-28 00:51:24.049231 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-28 
00:51:24.049235 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-28 00:51:24.049239 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-28 00:51:24.049244 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-28 00:51:24.049251 | orchestrator | 2026-02-28 00:51:24.049258 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-02-28 00:51:24.049265 | orchestrator | 2026-02-28 00:51:24.049273 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-28 00:51:24.049280 | orchestrator | Saturday 28 February 2026 00:50:02 +0000 (0:00:01.343) 0:00:03.324 ***** 2026-02-28 00:51:24.049288 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:51:24.049297 | orchestrator | 2026-02-28 00:51:24.049304 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-28 00:51:24.049312 | orchestrator | Saturday 28 February 2026 00:50:05 +0000 (0:00:02.708) 0:00:06.033 ***** 2026-02-28 00:51:24.049318 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-28 00:51:24.049326 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-28 00:51:24.049333 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-28 00:51:24.049339 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-28 00:51:24.049346 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-28 00:51:24.049370 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-28 00:51:24.049399 | orchestrator | 2026-02-28 00:51:24.049408 | orchestrator | TASK [module-load : Persist modules via modules-load.d] 
************************
2026-02-28 00:51:24.049415 | orchestrator | Saturday 28 February 2026 00:50:07 +0000 (0:00:02.301) 0:00:08.335 *****
2026-02-28 00:51:24.049422 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-02-28 00:51:24.049431 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-02-28 00:51:24.049436 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-02-28 00:51:24.049440 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-02-28 00:51:24.049445 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-02-28 00:51:24.049449 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-02-28 00:51:24.049453 | orchestrator |
2026-02-28 00:51:24.049458 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-28 00:51:24.049462 | orchestrator | Saturday 28 February 2026 00:50:10 +0000 (0:00:03.185) 0:00:11.520 *****
2026-02-28 00:51:24.049467 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-02-28 00:51:24.049471 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:24.049476 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-02-28 00:51:24.049480 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-02-28 00:51:24.049485 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:24.049489 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-02-28 00:51:24.049493 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:24.049498 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-02-28 00:51:24.049502 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:24.049506 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:24.049511 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-02-28 00:51:24.049515 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:51:24.049519 | orchestrator |
2026-02-28 00:51:24.049523 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-02-28 00:51:24.049528 | orchestrator | Saturday 28 February 2026 00:50:13 +0000 (0:00:02.648) 0:00:14.168 *****
2026-02-28 00:51:24.049532 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:24.049536 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:24.049541 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:24.049545 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:24.049549 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:24.049554 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:51:24.049558 | orchestrator |
2026-02-28 00:51:24.049562 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-02-28 00:51:24.049567 | orchestrator | Saturday 28 February 2026 00:50:14 +0000 (0:00:01.393) 0:00:15.561 *****
2026-02-28 00:51:24.049587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.049596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.049605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.049611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.049615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.049620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.049631 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.049636 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.049646 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.049651 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.049655 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.049663 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.049667 | orchestrator |
2026-02-28 00:51:24.049673 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-02-28 00:51:24.049678 | orchestrator | Saturday 28 February 2026 00:50:18 +0000 (0:00:03.325) 0:00:18.887 *****
2026-02-28 00:51:24.049684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.049696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.049702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.049707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.049713 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.049723 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.049734 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.049739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.049745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.049750 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.049755 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.049768 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.049777 | orchestrator |
2026-02-28 00:51:24.049782 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-02-28 00:51:24.049787 | orchestrator | Saturday 28 February 2026 00:50:22 +0000 (0:00:04.645) 0:00:23.533 *****
2026-02-28 00:51:24.049792 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:24.049798 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:24.049802 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:24.049808 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:24.049813 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:24.049818 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:51:24.049823 | orchestrator |
2026-02-28 00:51:24.049828 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] ***************
2026-02-28 00:51:24.049833 | orchestrator | Saturday 28 February 2026 00:50:23 +0000 (0:00:01.309) 0:00:24.842 *****
2026-02-28 00:51:24.049839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.049844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.049850 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.049855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.049870 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.049876 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.049881 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.049886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.049890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.049895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.049912 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.049917 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.049921 | orchestrator |
2026-02-28 00:51:24.049926 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] ***
2026-02-28 00:51:24.049931 | orchestrator | Saturday 28 February 2026 00:50:27 +0000 (0:00:03.540) 0:00:28.382 *****
2026-02-28 00:51:24.049935 | orchestrator | changed: [testbed-node-0] => {
2026-02-28 00:51:24.049940 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 00:51:24.049944 | orchestrator | }
2026-02-28 00:51:24.049949 | orchestrator | changed: [testbed-node-1] => {
2026-02-28 00:51:24.049953 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 00:51:24.049958 | orchestrator | }
2026-02-28 00:51:24.049962 | orchestrator | changed: [testbed-node-2] => {
2026-02-28 00:51:24.049966 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 00:51:24.049971 | orchestrator | }
2026-02-28 00:51:24.049975 | orchestrator | changed: [testbed-node-3] => {
2026-02-28 00:51:24.049979 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 00:51:24.049984 | orchestrator | }
2026-02-28 00:51:24.049988 | orchestrator | changed: [testbed-node-4] => {
2026-02-28 00:51:24.049993 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 00:51:24.049997 | orchestrator | }
2026-02-28 00:51:24.050001 | orchestrator | changed: [testbed-node-5] => {
2026-02-28 00:51:24.050006 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 00:51:24.050010 | orchestrator | }
2026-02-28 00:51:24.050064 | orchestrator |
2026-02-28 00:51:24.050071 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-28 00:51:24.050076 | orchestrator | Saturday 28 February 2026 00:50:30 +0000 (0:00:03.430) 0:00:31.813 *****
2026-02-28 00:51:24.050080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.050089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.050094 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:24.050102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.050109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.050114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.050119 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:24.050123 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.050128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.050136 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.050141 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:24.050145 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:24.050156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.050161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.050165 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:24.050170 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-28 00:51:24.050175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-28 00:51:24.050290 | orchestrator | skipping:
[testbed-node-5] 2026-02-28 00:51:24.050305 | orchestrator | 2026-02-28 00:51:24.050310 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-28 00:51:24.050315 | orchestrator | Saturday 28 February 2026 00:50:32 +0000 (0:00:01.813) 0:00:33.626 ***** 2026-02-28 00:51:24.050328 | orchestrator | 2026-02-28 00:51:24.050332 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-28 00:51:24.050337 | orchestrator | Saturday 28 February 2026 00:50:32 +0000 (0:00:00.144) 0:00:33.771 ***** 2026-02-28 00:51:24.050341 | orchestrator | 2026-02-28 00:51:24.050352 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-28 00:51:24.050356 | orchestrator | Saturday 28 February 2026 00:50:33 +0000 (0:00:00.160) 0:00:33.932 ***** 2026-02-28 00:51:24.050360 | orchestrator | 2026-02-28 00:51:24.050365 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-28 00:51:24.050369 | orchestrator | Saturday 28 February 2026 00:50:33 +0000 (0:00:00.207) 0:00:34.139 ***** 2026-02-28 00:51:24.050374 | orchestrator | 2026-02-28 00:51:24.050378 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-28 00:51:24.050382 | orchestrator | Saturday 28 February 2026 00:50:34 +0000 (0:00:00.922) 0:00:35.061 ***** 2026-02-28 00:51:24.050387 | orchestrator | 2026-02-28 00:51:24.050391 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-28 00:51:24.050395 | orchestrator | Saturday 28 February 2026 00:50:34 +0000 (0:00:00.246) 0:00:35.308 ***** 2026-02-28 00:51:24.050399 | orchestrator | 2026-02-28 00:51:24.050404 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-28 00:51:24.050408 | orchestrator | Saturday 28 February 2026 00:50:34 +0000 
(0:00:00.183) 0:00:35.492 ***** 2026-02-28 00:51:24.050412 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:51:24.050417 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:51:24.050421 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:51:24.050425 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:51:24.050430 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:51:24.050434 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:51:24.050438 | orchestrator | 2026-02-28 00:51:24.050443 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-28 00:51:24.050454 | orchestrator | Saturday 28 February 2026 00:50:44 +0000 (0:00:10.365) 0:00:45.857 ***** 2026-02-28 00:51:24.050458 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:51:24.050463 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:51:24.050467 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:51:24.050472 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:51:24.050476 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:51:24.050480 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:51:24.050485 | orchestrator | 2026-02-28 00:51:24.050489 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-28 00:51:24.050514 | orchestrator | Saturday 28 February 2026 00:50:46 +0000 (0:00:01.603) 0:00:47.460 ***** 2026-02-28 00:51:24.050520 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:51:24.050524 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:51:24.050529 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:51:24.050533 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:51:24.050537 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:51:24.050542 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:51:24.050546 | orchestrator | 2026-02-28 00:51:24.050550 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] 
******************** 2026-02-28 00:51:24.050555 | orchestrator | Saturday 28 February 2026 00:50:56 +0000 (0:00:09.647) 0:00:57.107 ***** 2026-02-28 00:51:24.050559 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-28 00:51:24.050568 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-28 00:51:24.050573 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-28 00:51:24.050577 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-28 00:51:24.050581 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-28 00:51:24.050586 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-28 00:51:24.050590 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-28 00:51:24.050595 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-02-28 00:51:24.050599 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-28 00:51:24.050603 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-28 00:51:24.050608 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-28 00:51:24.050612 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-28 00:51:24.050616 | orchestrator | 
ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-28 00:51:24.050621 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-28 00:51:24.050625 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-28 00:51:24.050635 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-28 00:51:24.050640 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-28 00:51:24.050644 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-28 00:51:24.050648 | orchestrator | 2026-02-28 00:51:24.050653 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-02-28 00:51:24.050657 | orchestrator | Saturday 28 February 2026 00:51:05 +0000 (0:00:09.622) 0:01:06.729 ***** 2026-02-28 00:51:24.050662 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-28 00:51:24.050666 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:51:24.050671 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-28 00:51:24.050675 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:51:24.050680 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-28 00:51:24.050684 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:51:24.050688 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-02-28 00:51:24.050693 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-02-28 00:51:24.050697 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-02-28 00:51:24.050702 | orchestrator | 2026-02-28 00:51:24.050706 | orchestrator | TASK 
[openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-28 00:51:24.050711 | orchestrator | Saturday 28 February 2026 00:51:08 +0000 (0:00:02.572) 0:01:09.302 ***** 2026-02-28 00:51:24.050715 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-28 00:51:24.050719 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:51:24.050724 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-28 00:51:24.050728 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:51:24.050736 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-28 00:51:24.050740 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:51:24.050745 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-02-28 00:51:24.050752 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-28 00:51:24.050757 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-28 00:51:24.050761 | orchestrator | 2026-02-28 00:51:24.050766 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-28 00:51:24.050770 | orchestrator | Saturday 28 February 2026 00:51:13 +0000 (0:00:04.724) 0:01:14.027 ***** 2026-02-28 00:51:24.050775 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:51:24.050782 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:51:24.050786 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:51:24.050790 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:51:24.050795 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:51:24.050799 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:51:24.050803 | orchestrator | 2026-02-28 00:51:24.050808 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:51:24.050813 | orchestrator | testbed-node-0 : ok=16  changed=12  unreachable=0 failed=0 skipped=4 
 rescued=0 ignored=0 2026-02-28 00:51:24.050819 | orchestrator | testbed-node-1 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-28 00:51:24.050823 | orchestrator | testbed-node-2 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-28 00:51:24.050827 | orchestrator | testbed-node-3 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 00:51:24.050832 | orchestrator | testbed-node-4 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 00:51:24.050836 | orchestrator | testbed-node-5 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 00:51:24.050841 | orchestrator | 2026-02-28 00:51:24.050845 | orchestrator | 2026-02-28 00:51:24.050849 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:51:24.050854 | orchestrator | Saturday 28 February 2026 00:51:22 +0000 (0:00:08.964) 0:01:22.992 ***** 2026-02-28 00:51:24.050858 | orchestrator | =============================================================================== 2026-02-28 00:51:24.050863 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.61s 2026-02-28 00:51:24.050867 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.37s 2026-02-28 00:51:24.050871 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 9.62s 2026-02-28 00:51:24.050876 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.72s 2026-02-28 00:51:24.050880 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.64s 2026-02-28 00:51:24.050884 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.54s 2026-02-28 00:51:24.050889 | orchestrator | service-check-containers : openvswitch | Notify handlers to 
restart containers --- 3.43s 2026-02-28 00:51:24.050893 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 3.33s 2026-02-28 00:51:24.050897 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 3.19s 2026-02-28 00:51:24.050902 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.71s 2026-02-28 00:51:24.050906 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.65s 2026-02-28 00:51:24.050910 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.57s 2026-02-28 00:51:24.050915 | orchestrator | module-load : Load modules ---------------------------------------------- 2.30s 2026-02-28 00:51:24.050923 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.87s 2026-02-28 00:51:24.050927 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.81s 2026-02-28 00:51:24.050931 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.60s 2026-02-28 00:51:24.050936 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.54s 2026-02-28 00:51:24.050940 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.39s 2026-02-28 00:51:24.050944 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.35s 2026-02-28 00:51:24.050949 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.31s 2026-02-28 00:51:24.051037 | orchestrator | 2026-02-28 00:51:24 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:51:24.051043 | orchestrator | 2026-02-28 00:51:24 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:51:24.051048 | orchestrator | 2026-02-28 00:51:24 | INFO  | Task 
62b32f35-7e9c-46e5-ad32-af6d8892e7f1 is in state STARTED 2026-02-28 00:51:24.051052 | orchestrator | 2026-02-28 00:51:24 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:13.334695 | orchestrator | 2026-02-28 00:52:13 | INFO  | Task f8451b02-4056-44a0-8170-d4f200625151 is in state STARTED 2026-02-28 00:52:13.336114 | orchestrator | 2026-02-28 00:52:13 | INFO  | Task f2d296b6-0e33-4574-85a0-ce15e7bf129b is in state SUCCESS 2026-02-28 00:52:13.340328 | orchestrator | 2026-02-28 00:52:13.340376 | orchestrator | 2026-02-28 00:52:13.340384 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-28 00:52:13.340391 | orchestrator | 2026-02-28 00:52:13.340397 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-28 00:52:13.340403 | orchestrator | Saturday 28 February 2026 00:47:08 +0000 (0:00:00.215) 0:00:00.215 ***** 2026-02-28 00:52:13.340409 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:52:13.340416 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:52:13.340421 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:52:13.340427 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:13.340432 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:52:13.340437 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:52:13.340442 | orchestrator | 2026-02-28 00:52:13.340448 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-28 00:52:13.340453 | orchestrator | Saturday 28 February 2026 00:47:08 +0000 (0:00:00.773) 0:00:00.989 ***** 2026-02-28 00:52:13.340459 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:52:13.340465 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:52:13.340470 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:52:13.340475 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:13.340480 | orchestrator | skipping: [testbed-node-1] 2026-02-28
00:52:13.340485 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:13.340491 | orchestrator | 2026-02-28 00:52:13.340496 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-02-28 00:52:13.340501 | orchestrator | Saturday 28 February 2026 00:47:09 +0000 (0:00:00.762) 0:00:01.751 ***** 2026-02-28 00:52:13.340521 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:52:13.340526 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:52:13.340532 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:52:13.340537 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:13.340542 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:13.340547 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:13.340553 | orchestrator | 2026-02-28 00:52:13.340558 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-02-28 00:52:13.340563 | orchestrator | Saturday 28 February 2026 00:47:10 +0000 (0:00:00.671) 0:00:02.423 ***** 2026-02-28 00:52:13.340568 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:52:13.340573 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:52:13.340579 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:52:13.340584 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:13.340589 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:13.340594 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:13.340599 | orchestrator | 2026-02-28 00:52:13.340604 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-02-28 00:52:13.340609 | orchestrator | Saturday 28 February 2026 00:47:12 +0000 (0:00:02.486) 0:00:04.909 ***** 2026-02-28 00:52:13.340614 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:52:13.340620 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:52:13.340625 | orchestrator | changed: [testbed-node-5] 2026-02-28 
00:52:13.340630 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:13.340635 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:13.340640 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:13.340645 | orchestrator | 2026-02-28 00:52:13.340650 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-02-28 00:52:13.340655 | orchestrator | Saturday 28 February 2026 00:47:13 +0000 (0:00:01.108) 0:00:06.018 ***** 2026-02-28 00:52:13.340661 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:52:13.340666 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:52:13.340671 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:52:13.340676 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:13.340681 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:13.340686 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:13.340691 | orchestrator | 2026-02-28 00:52:13.340697 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-02-28 00:52:13.340702 | orchestrator | Saturday 28 February 2026 00:47:14 +0000 (0:00:00.894) 0:00:06.912 ***** 2026-02-28 00:52:13.340707 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:52:13.340713 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:52:13.340718 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:52:13.340723 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:13.340728 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:13.340733 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:13.340738 | orchestrator | 2026-02-28 00:52:13.340743 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-02-28 00:52:13.340749 | orchestrator | Saturday 28 February 2026 00:47:15 +0000 (0:00:00.681) 0:00:07.594 ***** 2026-02-28 00:52:13.340754 | orchestrator | skipping: [testbed-node-3] 2026-02-28 
00:52:13.340759 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:52:13.340764 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:52:13.340769 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:13.340774 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:13.340779 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:13.340785 | orchestrator | 2026-02-28 00:52:13.340790 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-02-28 00:52:13.340795 | orchestrator | Saturday 28 February 2026 00:47:15 +0000 (0:00:00.536) 0:00:08.130 ***** 2026-02-28 00:52:13.340800 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-28 00:52:13.340809 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-28 00:52:13.340814 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:52:13.340819 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-28 00:52:13.340825 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-28 00:52:13.340830 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:52:13.340835 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-28 00:52:13.340840 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-28 00:52:13.340845 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:52:13.340851 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-28 00:52:13.340864 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-28 00:52:13.340870 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:13.340875 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-28 00:52:13.340880 | 
orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-28 00:52:13.340885 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:13.341371 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-28 00:52:13.341399 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-28 00:52:13.341405 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:13.341410 | orchestrator | 2026-02-28 00:52:13.341416 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-02-28 00:52:13.341422 | orchestrator | Saturday 28 February 2026 00:47:17 +0000 (0:00:01.494) 0:00:09.625 ***** 2026-02-28 00:52:13.341427 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:52:13.341432 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:52:13.341437 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:52:13.341442 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:13.341447 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:13.341452 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:13.341457 | orchestrator | 2026-02-28 00:52:13.341463 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-02-28 00:52:13.341469 | orchestrator | Saturday 28 February 2026 00:47:19 +0000 (0:00:01.906) 0:00:11.531 ***** 2026-02-28 00:52:13.341474 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:52:13.341480 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:52:13.341485 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:52:13.341490 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:13.341495 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:52:13.341500 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:52:13.341505 | orchestrator | 2026-02-28 00:52:13.341510 | orchestrator | TASK [k3s_download : 
Download k3s binary x64] ********************************** 2026-02-28 00:52:13.341515 | orchestrator | Saturday 28 February 2026 00:47:20 +0000 (0:00:01.551) 0:00:13.083 ***** 2026-02-28 00:52:13.341521 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:52:13.341526 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:13.341531 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:13.341536 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:13.341541 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:52:13.341546 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:52:13.341551 | orchestrator | 2026-02-28 00:52:13.341556 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-02-28 00:52:13.341561 | orchestrator | Saturday 28 February 2026 00:47:27 +0000 (0:00:06.123) 0:00:19.207 ***** 2026-02-28 00:52:13.341567 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:52:13.341572 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:52:13.341577 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:52:13.341582 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:13.341595 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:13.341600 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:13.341605 | orchestrator | 2026-02-28 00:52:13.341610 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-02-28 00:52:13.341615 | orchestrator | Saturday 28 February 2026 00:47:28 +0000 (0:00:01.491) 0:00:20.698 ***** 2026-02-28 00:52:13.341621 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:52:13.341626 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:52:13.341631 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:52:13.341636 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:13.341641 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:13.341646 | orchestrator | 
skipping: [testbed-node-2] 2026-02-28 00:52:13.341651 | orchestrator | 2026-02-28 00:52:13.341656 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-02-28 00:52:13.341662 | orchestrator | Saturday 28 February 2026 00:47:31 +0000 (0:00:02.653) 0:00:23.351 ***** 2026-02-28 00:52:13.341667 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:52:13.341672 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:52:13.341678 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:52:13.341683 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:13.341688 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:13.341693 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:13.341698 | orchestrator | 2026-02-28 00:52:13.341703 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-02-28 00:52:13.341708 | orchestrator | Saturday 28 February 2026 00:47:32 +0000 (0:00:01.137) 0:00:24.488 ***** 2026-02-28 00:52:13.341714 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-02-28 00:52:13.341719 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-02-28 00:52:13.341724 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:52:13.341729 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-02-28 00:52:13.341734 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-02-28 00:52:13.341739 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:52:13.341744 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-02-28 00:52:13.341749 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-02-28 00:52:13.341754 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:52:13.341759 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-02-28 00:52:13.341764 | orchestrator | skipping: 
[testbed-node-0] => (item=rancher/k3s)  2026-02-28 00:52:13.341770 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:13.341777 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-02-28 00:52:13.341783 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-02-28 00:52:13.341788 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:13.341793 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-02-28 00:52:13.341798 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-02-28 00:52:13.341803 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:13.341808 | orchestrator | 2026-02-28 00:52:13.341813 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-02-28 00:52:13.341827 | orchestrator | Saturday 28 February 2026 00:47:34 +0000 (0:00:02.145) 0:00:26.634 ***** 2026-02-28 00:52:13.341832 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:52:13.341837 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:52:13.341842 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:52:13.341847 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:13.341852 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:13.341858 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:13.341863 | orchestrator | 2026-02-28 00:52:13.341868 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-02-28 00:52:13.341873 | orchestrator | Saturday 28 February 2026 00:47:35 +0000 (0:00:01.447) 0:00:28.081 ***** 2026-02-28 00:52:13.341882 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:52:13.341887 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:52:13.341892 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:52:13.341897 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:13.341902 | orchestrator | skipping: 
[testbed-node-1] 2026-02-28 00:52:13.341907 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:13.341912 | orchestrator | 2026-02-28 00:52:13.341917 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-02-28 00:52:13.341922 | orchestrator | 2026-02-28 00:52:13.341928 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-02-28 00:52:13.341933 | orchestrator | Saturday 28 February 2026 00:47:38 +0000 (0:00:02.527) 0:00:30.608 ***** 2026-02-28 00:52:13.341938 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:52:13.341943 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:13.341948 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:52:13.341953 | orchestrator | 2026-02-28 00:52:13.341958 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-02-28 00:52:13.341964 | orchestrator | Saturday 28 February 2026 00:47:41 +0000 (0:00:03.400) 0:00:34.009 ***** 2026-02-28 00:52:13.341969 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:52:13.341974 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:52:13.341979 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:13.341984 | orchestrator | 2026-02-28 00:52:13.341989 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-02-28 00:52:13.341994 | orchestrator | Saturday 28 February 2026 00:47:44 +0000 (0:00:02.849) 0:00:36.859 ***** 2026-02-28 00:52:13.341999 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:13.342004 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:52:13.342009 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:52:13.342065 | orchestrator | 2026-02-28 00:52:13.342073 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-02-28 00:52:13.342078 | orchestrator | Saturday 28 February 2026 00:47:46 +0000 (0:00:01.626) 0:00:38.485 ***** 
2026-02-28 00:52:13.342083 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:52:13.342089 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:13.342094 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:52:13.342099 | orchestrator | 2026-02-28 00:52:13.342104 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-02-28 00:52:13.342109 | orchestrator | Saturday 28 February 2026 00:47:47 +0000 (0:00:01.116) 0:00:39.604 ***** 2026-02-28 00:52:13.342114 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:13.342119 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:13.342124 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:13.342129 | orchestrator | 2026-02-28 00:52:13.342134 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-02-28 00:52:13.342139 | orchestrator | Saturday 28 February 2026 00:47:48 +0000 (0:00:01.211) 0:00:40.815 ***** 2026-02-28 00:52:13.342144 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:13.342150 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:13.342155 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:13.342160 | orchestrator | 2026-02-28 00:52:13.342165 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-02-28 00:52:13.342170 | orchestrator | Saturday 28 February 2026 00:47:50 +0000 (0:00:01.489) 0:00:42.304 ***** 2026-02-28 00:52:13.342175 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:13.342180 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:13.342185 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:13.342190 | orchestrator | 2026-02-28 00:52:13.342195 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-02-28 00:52:13.342200 | orchestrator | Saturday 28 February 2026 00:47:52 +0000 (0:00:02.409) 0:00:44.714 ***** 2026-02-28 00:52:13.342205 
| orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:52:13.342210 | orchestrator | 2026-02-28 00:52:13.342220 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-02-28 00:52:13.342225 | orchestrator | Saturday 28 February 2026 00:47:53 +0000 (0:00:00.925) 0:00:45.640 ***** 2026-02-28 00:52:13.342230 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:52:13.342235 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:52:13.342240 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:13.342245 | orchestrator | 2026-02-28 00:52:13.342250 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-02-28 00:52:13.342256 | orchestrator | Saturday 28 February 2026 00:47:57 +0000 (0:00:04.455) 0:00:50.095 ***** 2026-02-28 00:52:13.342261 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:13.342266 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:13.342302 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:13.342311 | orchestrator | 2026-02-28 00:52:13.342319 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-02-28 00:52:13.342326 | orchestrator | Saturday 28 February 2026 00:47:59 +0000 (0:00:01.190) 0:00:51.285 ***** 2026-02-28 00:52:13.342336 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:13.342344 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:13.342350 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:13.342355 | orchestrator | 2026-02-28 00:52:13.342363 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-02-28 00:52:13.342371 | orchestrator | Saturday 28 February 2026 00:48:00 +0000 (0:00:01.253) 0:00:52.538 ***** 2026-02-28 00:52:13.342379 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:13.342388 | orchestrator | 
skipping: [testbed-node-2] 2026-02-28 00:52:13.342395 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:13.342403 | orchestrator | 2026-02-28 00:52:13.342411 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-02-28 00:52:13.342426 | orchestrator | Saturday 28 February 2026 00:48:02 +0000 (0:00:02.029) 0:00:54.569 ***** 2026-02-28 00:52:13.342434 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:13.342442 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:13.342449 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:13.342456 | orchestrator | 2026-02-28 00:52:13.342463 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-02-28 00:52:13.342471 | orchestrator | Saturday 28 February 2026 00:48:03 +0000 (0:00:01.117) 0:00:55.687 ***** 2026-02-28 00:52:13.342478 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:13.342485 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:13.342492 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:13.342500 | orchestrator | 2026-02-28 00:52:13.342507 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-02-28 00:52:13.342515 | orchestrator | Saturday 28 February 2026 00:48:03 +0000 (0:00:00.462) 0:00:56.150 ***** 2026-02-28 00:52:13.342522 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:13.342530 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:13.342537 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:13.342545 | orchestrator | 2026-02-28 00:52:13.342553 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-02-28 00:52:13.342561 | orchestrator | Saturday 28 February 2026 00:48:06 +0000 (0:00:02.467) 0:00:58.617 ***** 2026-02-28 00:52:13.342569 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:13.342576 | orchestrator | ok: 
[testbed-node-1] 2026-02-28 00:52:13.342583 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:52:13.342591 | orchestrator | 2026-02-28 00:52:13.342598 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-02-28 00:52:13.342606 | orchestrator | Saturday 28 February 2026 00:48:09 +0000 (0:00:03.265) 0:01:01.883 ***** 2026-02-28 00:52:13.342613 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:13.342621 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:52:13.342629 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:52:13.342636 | orchestrator | 2026-02-28 00:52:13.342643 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-02-28 00:52:13.342663 | orchestrator | Saturday 28 February 2026 00:48:10 +0000 (0:00:00.753) 0:01:02.637 ***** 2026-02-28 00:52:13.342672 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-28 00:52:13.342681 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-28 00:52:13.342690 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-28 00:52:13.342698 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-28 00:52:13.342707 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-28 00:52:13.342716 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-02-28 00:52:13.342724 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-28 00:52:13.342734 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-28 00:52:13.342740 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-28 00:52:13.342745 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-28 00:52:13.342751 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-28 00:52:13.342756 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-28 00:52:13.342761 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-02-28 00:52:13.342766 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-02-28 00:52:13.342772 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2026-02-28 00:52:13.342777 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:13.342782 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:52:13.342792 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:52:13.342797 | orchestrator | 2026-02-28 00:52:13.342803 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-02-28 00:52:13.342808 | orchestrator | Saturday 28 February 2026 00:49:04 +0000 (0:00:54.165) 0:01:56.802 ***** 2026-02-28 00:52:13.342813 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:13.342818 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:13.342823 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:13.342829 | orchestrator | 2026-02-28 00:52:13.342834 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-02-28 00:52:13.342845 | orchestrator | Saturday 28 February 2026 00:49:04 +0000 (0:00:00.340) 0:01:57.143 ***** 2026-02-28 00:52:13.342850 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:13.342856 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:13.342861 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:13.342866 | orchestrator | 2026-02-28 00:52:13.342871 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-02-28 00:52:13.342876 | orchestrator | Saturday 28 February 2026 00:49:06 +0000 (0:00:01.528) 0:01:58.672 ***** 2026-02-28 00:52:13.342882 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:13.342891 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:13.342897 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:13.342902 | orchestrator | 2026-02-28 00:52:13.342910 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-02-28 00:52:13.342919 | orchestrator | Saturday 28 February 2026 00:49:08 +0000 (0:00:02.171) 0:02:00.843 ***** 2026-02-28 00:52:13.342927 
| orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:13.342935 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:13.342943 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:13.342951 | orchestrator | 2026-02-28 00:52:13.342958 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-02-28 00:52:13.342966 | orchestrator | Saturday 28 February 2026 00:49:34 +0000 (0:00:25.531) 0:02:26.375 ***** 2026-02-28 00:52:13.342975 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:13.342982 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:52:13.342989 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:52:13.342997 | orchestrator | 2026-02-28 00:52:13.343005 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-02-28 00:52:13.343013 | orchestrator | Saturday 28 February 2026 00:49:34 +0000 (0:00:00.656) 0:02:27.032 ***** 2026-02-28 00:52:13.343021 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:13.343029 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:52:13.343038 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:52:13.343047 | orchestrator | 2026-02-28 00:52:13.343055 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-02-28 00:52:13.343063 | orchestrator | Saturday 28 February 2026 00:49:35 +0000 (0:00:00.676) 0:02:27.708 ***** 2026-02-28 00:52:13.343070 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:13.343079 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:13.343087 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:13.343096 | orchestrator | 2026-02-28 00:52:13.343104 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-02-28 00:52:13.343113 | orchestrator | Saturday 28 February 2026 00:49:36 +0000 (0:00:00.575) 0:02:28.283 ***** 2026-02-28 00:52:13.343122 | orchestrator | ok: [testbed-node-1] 
2026-02-28 00:52:13.343131 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:13.343139 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:52:13.343147 | orchestrator | 2026-02-28 00:52:13.343156 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-02-28 00:52:13.343165 | orchestrator | Saturday 28 February 2026 00:49:36 +0000 (0:00:00.807) 0:02:29.091 ***** 2026-02-28 00:52:13.343174 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:13.343183 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:52:13.343191 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:52:13.343200 | orchestrator | 2026-02-28 00:52:13.343208 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-02-28 00:52:13.343217 | orchestrator | Saturday 28 February 2026 00:49:37 +0000 (0:00:00.276) 0:02:29.367 ***** 2026-02-28 00:52:13.343226 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:13.343234 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:13.343242 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:13.343251 | orchestrator | 2026-02-28 00:52:13.343258 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-02-28 00:52:13.343267 | orchestrator | Saturday 28 February 2026 00:49:37 +0000 (0:00:00.615) 0:02:29.982 ***** 2026-02-28 00:52:13.343300 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:13.343309 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:13.343317 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:13.343326 | orchestrator | 2026-02-28 00:52:13.343335 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-02-28 00:52:13.343344 | orchestrator | Saturday 28 February 2026 00:49:38 +0000 (0:00:00.656) 0:02:30.639 ***** 2026-02-28 00:52:13.343352 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:13.343362 | 
orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:13.343370 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:13.343386 | orchestrator | 2026-02-28 00:52:13.343396 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-02-28 00:52:13.343405 | orchestrator | Saturday 28 February 2026 00:49:39 +0000 (0:00:01.194) 0:02:31.834 ***** 2026-02-28 00:52:13.343413 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:13.343422 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:13.343431 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:13.343440 | orchestrator | 2026-02-28 00:52:13.343449 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-02-28 00:52:13.343457 | orchestrator | Saturday 28 February 2026 00:49:40 +0000 (0:00:00.895) 0:02:32.730 ***** 2026-02-28 00:52:13.343467 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:13.343475 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:13.343484 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:13.343492 | orchestrator | 2026-02-28 00:52:13.343501 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-02-28 00:52:13.343510 | orchestrator | Saturday 28 February 2026 00:49:40 +0000 (0:00:00.327) 0:02:33.057 ***** 2026-02-28 00:52:13.343519 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:13.343528 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:13.343537 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:13.343546 | orchestrator | 2026-02-28 00:52:13.343560 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-02-28 00:52:13.343573 | orchestrator | Saturday 28 February 2026 00:49:41 +0000 (0:00:00.326) 0:02:33.384 ***** 2026-02-28 00:52:13.343581 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:52:13.343590 | orchestrator | 
ok: [testbed-node-0]
2026-02-28 00:52:13.343598 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:52:13.343607 | orchestrator |
2026-02-28 00:52:13.343616 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-02-28 00:52:13.343623 | orchestrator | Saturday 28 February 2026 00:49:42 +0000 (0:00:01.069) 0:02:34.453 *****
2026-02-28 00:52:13.343631 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:52:13.343647 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:52:13.343655 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:52:13.343665 | orchestrator |
2026-02-28 00:52:13.343676 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-02-28 00:52:13.343687 | orchestrator | Saturday 28 February 2026 00:49:42 +0000 (0:00:00.675) 0:02:35.128 *****
2026-02-28 00:52:13.343695 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-28 00:52:13.343704 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-28 00:52:13.343712 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-28 00:52:13.343719 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-28 00:52:13.343727 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-28 00:52:13.343739 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-28 00:52:13.343746 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-28 00:52:13.343755 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-28 00:52:13.343763 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-28 00:52:13.343772 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-02-28 00:52:13.343782 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-28 00:52:13.343790 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-28 00:52:13.343806 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-02-28 00:52:13.343816 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-28 00:52:13.343824 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-28 00:52:13.343832 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-28 00:52:13.343840 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-28 00:52:13.343848 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-28 00:52:13.343856 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-28 00:52:13.343865 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-28 00:52:13.343873 | orchestrator |
2026-02-28 00:52:13.343881 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-02-28 00:52:13.343889 | orchestrator |
2026-02-28 00:52:13.343954 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-02-28 00:52:13.343968 | orchestrator | Saturday 28 February 2026 00:49:46 +0000 (0:00:03.390) 0:02:38.519 *****
2026-02-28 00:52:13.343977 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:52:13.343986 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:52:13.343996 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:52:13.344005 | orchestrator |
2026-02-28 00:52:13.344014 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-02-28 00:52:13.344023 | orchestrator | Saturday 28 February 2026 00:49:46 +0000 (0:00:00.577) 0:02:39.096 *****
2026-02-28 00:52:13.344032 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:52:13.344042 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:52:13.344051 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:52:13.344061 | orchestrator |
2026-02-28 00:52:13.344070 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-02-28 00:52:13.344080 | orchestrator | Saturday 28 February 2026 00:49:47 +0000 (0:00:00.646) 0:02:39.743 *****
2026-02-28 00:52:13.344089 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:52:13.344099 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:52:13.344108 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:52:13.344118 | orchestrator |
2026-02-28 00:52:13.344126 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-02-28 00:52:13.344136 | orchestrator | Saturday 28 February 2026 00:49:47 +0000 (0:00:00.368) 0:02:40.112 *****
2026-02-28 00:52:13.344146 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:52:13.344156 | orchestrator |
2026-02-28 00:52:13.344166 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-02-28 00:52:13.344175 | orchestrator | Saturday 28 February 2026 00:49:48 +0000 (0:00:00.711) 0:02:40.823 *****
2026-02-28 00:52:13.344183 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:52:13.344191 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:52:13.344206 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:52:13.344215 | orchestrator |
2026-02-28 00:52:13.344223 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-02-28 00:52:13.344232 | orchestrator | Saturday 28 February 2026 00:49:48 +0000 (0:00:00.332) 0:02:41.156 *****
2026-02-28 00:52:13.344241 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:52:13.344249 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:52:13.344258 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:52:13.344266 | orchestrator |
2026-02-28 00:52:13.344291 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-02-28 00:52:13.344310 | orchestrator | Saturday 28 February 2026 00:49:49 +0000 (0:00:00.343) 0:02:41.500 *****
2026-02-28 00:52:13.344319 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:52:13.344335 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:52:13.344343 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:52:13.344351 | orchestrator |
2026-02-28 00:52:13.344359 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-02-28 00:52:13.344368 | orchestrator | Saturday 28 February 2026 00:49:49 +0000 (0:00:00.332) 0:02:41.833 *****
2026-02-28 00:52:13.344376 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:52:13.344385 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:52:13.344394 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:52:13.344402 | orchestrator |
2026-02-28 00:52:13.344410 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-02-28 00:52:13.344419 | orchestrator | Saturday 28 February 2026 00:49:50 +0000 (0:00:00.880) 0:02:42.714 *****
2026-02-28 00:52:13.344427 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:52:13.344435 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:52:13.344443 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:52:13.344451 | orchestrator |
2026-02-28 00:52:13.344459 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-02-28 00:52:13.344467 | orchestrator | Saturday 28 February 2026 00:49:51 +0000 (0:00:01.233) 0:02:43.948 *****
2026-02-28 00:52:13.344475 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:52:13.344484 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:52:13.344492 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:52:13.344500 | orchestrator |
2026-02-28 00:52:13.344508 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-02-28 00:52:13.344516 | orchestrator | Saturday 28 February 2026 00:49:53 +0000 (0:00:01.309) 0:02:45.257 *****
2026-02-28 00:52:13.344524 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:52:13.344531 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:52:13.344540 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:52:13.344549 | orchestrator |
2026-02-28 00:52:13.344557 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-02-28 00:52:13.344565 | orchestrator |
2026-02-28 00:52:13.344574 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-02-28 00:52:13.344581 | orchestrator | Saturday 28 February 2026 00:50:03 +0000 (0:00:10.850) 0:02:56.108 *****
2026-02-28 00:52:13.344590 | orchestrator | ok: [testbed-manager]
2026-02-28 00:52:13.344598 | orchestrator |
2026-02-28 00:52:13.344606 | orchestrator | TASK [Create .kube directory] **************************************************
2026-02-28 00:52:13.344614 | orchestrator | Saturday 28 February 2026 00:50:04 +0000 (0:00:00.858) 0:02:56.967 *****
2026-02-28 00:52:13.344623 | orchestrator | changed: [testbed-manager]
2026-02-28 00:52:13.344631 | orchestrator |
2026-02-28 00:52:13.344641 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-28 00:52:13.344649 | orchestrator | Saturday 28 February 2026 00:50:05 +0000 (0:00:00.426) 0:02:57.393 *****
2026-02-28 00:52:13.344657 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-28 00:52:13.344665 | orchestrator |
2026-02-28 00:52:13.344673 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-28 00:52:13.344681 | orchestrator | Saturday 28 February 2026 00:50:05 +0000 (0:00:00.521) 0:02:57.915 *****
2026-02-28 00:52:13.344689 | orchestrator | changed: [testbed-manager]
2026-02-28 00:52:13.344697 | orchestrator |
2026-02-28 00:52:13.344705 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-02-28 00:52:13.344714 | orchestrator | Saturday 28 February 2026 00:50:06 +0000 (0:00:00.942) 0:02:58.857 *****
2026-02-28 00:52:13.344722 | orchestrator | changed: [testbed-manager]
2026-02-28 00:52:13.344731 | orchestrator |
2026-02-28 00:52:13.344739 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-02-28 00:52:13.344747 | orchestrator | Saturday 28 February 2026 00:50:07 +0000 (0:00:00.739) 0:02:59.597 *****
2026-02-28 00:52:13.344756 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-28 00:52:13.344764 | orchestrator |
2026-02-28 00:52:13.344775 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-02-28 00:52:13.344793 | orchestrator | Saturday 28 February 2026 00:50:09 +0000 (0:00:02.031) 0:03:01.629 *****
2026-02-28 00:52:13.344802 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-28 00:52:13.344811 | orchestrator |
2026-02-28 00:52:13.344819 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-02-28 00:52:13.344828 | orchestrator | Saturday 28 February 2026 00:50:10 +0000 (0:00:01.098) 0:03:02.727 *****
2026-02-28 00:52:13.344836 | orchestrator | changed: [testbed-manager]
2026-02-28 00:52:13.344845 | orchestrator |
2026-02-28 00:52:13.344853 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-28 00:52:13.344862 | orchestrator | Saturday 28 February 2026 00:50:11 +0000 (0:00:00.886) 0:03:03.614 *****
2026-02-28 00:52:13.344871 | orchestrator | changed: [testbed-manager]
2026-02-28 00:52:13.344879 | orchestrator |
2026-02-28 00:52:13.344887 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-02-28 00:52:13.344896 | orchestrator |
2026-02-28 00:52:13.344907 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-02-28 00:52:13.344916 | orchestrator | Saturday 28 February 2026 00:50:12 +0000 (0:00:00.647) 0:03:04.261 *****
2026-02-28 00:52:13.344925 | orchestrator | ok: [testbed-manager]
2026-02-28 00:52:13.344935 | orchestrator |
2026-02-28 00:52:13.344945 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-02-28 00:52:13.344953 | orchestrator | Saturday 28 February 2026 00:50:12 +0000 (0:00:00.181) 0:03:04.443 *****
2026-02-28 00:52:13.344967 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-02-28 00:52:13.344977 | orchestrator |
2026-02-28 00:52:13.344986 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-02-28 00:52:13.344996 | orchestrator | Saturday 28 February 2026 00:50:12 +0000 (0:00:00.380) 0:03:04.824 *****
2026-02-28 00:52:13.345005 | orchestrator | ok: [testbed-manager]
2026-02-28 00:52:13.345014 | orchestrator |
2026-02-28 00:52:13.345023 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-02-28 00:52:13.345033 | orchestrator | Saturday 28 February 2026 00:50:13 +0000 (0:00:01.203) 0:03:06.028 *****
2026-02-28 00:52:13.345052 | orchestrator | ok: [testbed-manager]
2026-02-28 00:52:13.345061 | orchestrator |
2026-02-28 00:52:13.345069 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-02-28 00:52:13.345077 | orchestrator | Saturday 28 February 2026 00:50:16 +0000 (0:00:02.282) 0:03:08.311 *****
2026-02-28 00:52:13.345088 | orchestrator | changed: [testbed-manager]
2026-02-28 00:52:13.345101 | orchestrator |
2026-02-28 00:52:13.345109 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-02-28 00:52:13.345117 | orchestrator | Saturday 28 February 2026 00:50:17 +0000 (0:00:01.002) 0:03:09.313 *****
2026-02-28 00:52:13.345126 | orchestrator | ok: [testbed-manager]
2026-02-28 00:52:13.345134 | orchestrator |
2026-02-28 00:52:13.345142 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-02-28 00:52:13.345150 | orchestrator | Saturday 28 February 2026 00:50:17 +0000 (0:00:00.656) 0:03:09.969 *****
2026-02-28 00:52:13.345159 | orchestrator | changed: [testbed-manager]
2026-02-28 00:52:13.345166 | orchestrator |
2026-02-28 00:52:13.345174 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-02-28 00:52:13.345182 | orchestrator | Saturday 28 February 2026 00:50:26 +0000 (0:00:09.041) 0:03:19.011 *****
2026-02-28 00:52:13.345191 | orchestrator | changed: [testbed-manager]
2026-02-28 00:52:13.345199 | orchestrator |
2026-02-28 00:52:13.345208 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-02-28 00:52:13.345216 | orchestrator | Saturday 28 February 2026 00:50:41 +0000 (0:00:14.220) 0:03:33.232 *****
2026-02-28 00:52:13.345224 | orchestrator | ok: [testbed-manager]
2026-02-28 00:52:13.345233 | orchestrator |
2026-02-28 00:52:13.345242 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-02-28 00:52:13.345250 | orchestrator |
2026-02-28 00:52:13.345259 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-02-28 00:52:13.345302 | orchestrator | Saturday 28 February 2026 00:50:41 +0000 (0:00:00.526) 0:03:33.759 *****
2026-02-28 00:52:13.345313 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:52:13.345323 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:52:13.345332 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:52:13.345341 | orchestrator |
2026-02-28 00:52:13.345349 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-02-28 00:52:13.345358 | orchestrator | Saturday 28 February 2026 00:50:41 +0000 (0:00:00.412) 0:03:34.172 *****
2026-02-28 00:52:13.345368 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:52:13.345377 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:52:13.345386 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:52:13.345395 | orchestrator |
2026-02-28 00:52:13.345404 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-02-28 00:52:13.345414 | orchestrator | Saturday 28 February 2026 00:50:42 +0000 (0:00:00.731) 0:03:34.903 *****
2026-02-28 00:52:13.345423 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:52:13.345431 | orchestrator |
2026-02-28 00:52:13.345440 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-02-28 00:52:13.345448 | orchestrator | Saturday 28 February 2026 00:50:43 +0000 (0:00:00.851) 0:03:35.755 *****
2026-02-28 00:52:13.345456 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-28 00:52:13.345466 | orchestrator |
2026-02-28 00:52:13.345475 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-02-28 00:52:13.345484 | orchestrator | Saturday 28 February 2026 00:50:44 +0000 (0:00:01.211) 0:03:36.966 *****
2026-02-28 00:52:13.345493 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-28 00:52:13.345502 | orchestrator |
2026-02-28 00:52:13.345510 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-02-28 00:52:13.345518 | orchestrator | Saturday 28 February 2026 00:50:45 +0000 (0:00:01.199) 0:03:38.166 *****
2026-02-28 00:52:13.345527 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:52:13.345536 | orchestrator |
2026-02-28 00:52:13.345544 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-02-28 00:52:13.345553 | orchestrator | Saturday 28 February 2026 00:50:46 +0000 (0:00:00.139) 0:03:38.306 *****
2026-02-28 00:52:13.345562 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-28 00:52:13.345570 | orchestrator |
2026-02-28 00:52:13.345579 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-02-28 00:52:13.345587 | orchestrator | Saturday 28 February 2026 00:50:47 +0000 (0:00:01.178) 0:03:39.485 *****
2026-02-28 00:52:13.345597 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:52:13.345606 | orchestrator |
2026-02-28 00:52:13.345615 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-02-28 00:52:13.345625 | orchestrator | Saturday 28 February 2026 00:50:47 +0000 (0:00:00.171) 0:03:39.656 *****
2026-02-28 00:52:13.345634 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:52:13.345643 | orchestrator |
2026-02-28 00:52:13.345652 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-02-28 00:52:13.345661 | orchestrator | Saturday 28 February 2026 00:50:47 +0000 (0:00:00.142) 0:03:39.799 *****
2026-02-28 00:52:13.345670 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:52:13.345679 | orchestrator |
2026-02-28 00:52:13.345688 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-02-28 00:52:13.345697 | orchestrator | Saturday 28 February 2026 00:50:47 +0000 (0:00:00.146) 0:03:39.945 *****
2026-02-28 00:52:13.345706 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:52:13.345715 | orchestrator |
2026-02-28 00:52:13.345724 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-02-28 00:52:13.345740 | orchestrator | Saturday 28 February 2026 00:50:47 +0000 (0:00:00.119) 0:03:40.064 *****
2026-02-28 00:52:13.345750 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-28 00:52:13.345767 | orchestrator |
2026-02-28 00:52:13.345776 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-02-28 00:52:13.345785 | orchestrator | Saturday 28 February 2026 00:50:53 +0000 (0:00:05.852) 0:03:45.916 *****
2026-02-28 00:52:13.345794 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-02-28 00:52:13.345813 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-02-28 00:52:13.345824 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-28 00:52:13.345833 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-28 00:52:13.345841 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-28 00:52:13.345850 | orchestrator | 2026-02-28 00:52:13.345859 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-28 00:52:13.345865 | orchestrator | Saturday 28 February 2026 00:51:37 +0000 (0:00:44.172) 0:04:30.089 ***** 2026-02-28 00:52:13.345871 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 00:52:13.345876 | orchestrator | 2026-02-28 00:52:13.345881 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-28 00:52:13.345886 | orchestrator | Saturday 28 February 2026 00:51:39 +0000 (0:00:01.530) 0:04:31.619 ***** 2026-02-28 00:52:13.345891 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-28 00:52:13.345897 | orchestrator | 2026-02-28 00:52:13.345902 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-28 00:52:13.345907 | orchestrator | Saturday 28 February 2026 00:51:41 +0000 (0:00:01.839) 0:04:33.459 ***** 2026-02-28 00:52:13.345912 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-28 00:52:13.345917 | orchestrator | 2026-02-28 00:52:13.345922 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-28 00:52:13.345928 | orchestrator | Saturday 28 February 2026 00:51:42 +0000 (0:00:01.279) 0:04:34.738 ***** 2026-02-28 00:52:13.345933 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:13.345938 | orchestrator | 2026-02-28 00:52:13.345943 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-28 00:52:13.345948 | orchestrator 
| Saturday 28 February 2026 00:51:42 +0000 (0:00:00.149) 0:04:34.888 ***** 2026-02-28 00:52:13.345953 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-28 00:52:13.345959 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-28 00:52:13.345964 | orchestrator | 2026-02-28 00:52:13.345969 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-28 00:52:13.345974 | orchestrator | Saturday 28 February 2026 00:51:44 +0000 (0:00:02.229) 0:04:37.118 ***** 2026-02-28 00:52:13.345979 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:13.345985 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:13.345990 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:13.345995 | orchestrator | 2026-02-28 00:52:13.346000 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-28 00:52:13.346005 | orchestrator | Saturday 28 February 2026 00:51:45 +0000 (0:00:00.440) 0:04:37.558 ***** 2026-02-28 00:52:13.346010 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:13.346040 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:52:13.346047 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:52:13.346052 | orchestrator | 2026-02-28 00:52:13.346058 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-28 00:52:13.346063 | orchestrator | 2026-02-28 00:52:13.346068 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-28 00:52:13.346073 | orchestrator | Saturday 28 February 2026 00:51:46 +0000 (0:00:01.242) 0:04:38.800 ***** 2026-02-28 00:52:13.346078 | orchestrator | ok: [testbed-manager] 2026-02-28 00:52:13.346084 | orchestrator | 2026-02-28 00:52:13.346089 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-02-28 00:52:13.346098 | orchestrator | Saturday 28 February 2026 00:51:46 +0000 (0:00:00.262) 0:04:39.062 ***** 2026-02-28 00:52:13.346104 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-28 00:52:13.346109 | orchestrator | 2026-02-28 00:52:13.346114 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-28 00:52:13.346119 | orchestrator | Saturday 28 February 2026 00:51:47 +0000 (0:00:00.284) 0:04:39.346 ***** 2026-02-28 00:52:13.346124 | orchestrator | changed: [testbed-manager] 2026-02-28 00:52:13.346129 | orchestrator | 2026-02-28 00:52:13.346134 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-28 00:52:13.346139 | orchestrator | 2026-02-28 00:52:13.346144 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-28 00:52:13.346149 | orchestrator | Saturday 28 February 2026 00:51:52 +0000 (0:00:05.567) 0:04:44.914 ***** 2026-02-28 00:52:13.346155 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:52:13.346160 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:52:13.346165 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:52:13.346170 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:13.346175 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:52:13.346180 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:52:13.346185 | orchestrator | 2026-02-28 00:52:13.346190 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-28 00:52:13.346196 | orchestrator | Saturday 28 February 2026 00:51:53 +0000 (0:00:01.149) 0:04:46.063 ***** 2026-02-28 00:52:13.346201 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-28 00:52:13.346206 | orchestrator | ok: [testbed-node-4 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-02-28 00:52:13.346212 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-28 00:52:13.346220 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-28 00:52:13.346226 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-28 00:52:13.346231 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-28 00:52:13.346236 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-28 00:52:13.346241 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-28 00:52:13.346251 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-28 00:52:13.346256 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-28 00:52:13.346261 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-28 00:52:13.346266 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-28 00:52:13.346316 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-28 00:52:13.346321 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-28 00:52:13.346326 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-28 00:52:13.346331 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-28 00:52:13.346336 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-28 00:52:13.346342 | orchestrator | ok: [testbed-node-1 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-02-28 00:52:13.346347 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-28 00:52:13.346352 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-28 00:52:13.346357 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-28 00:52:13.346372 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-28 00:52:13.346377 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-28 00:52:13.346382 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-28 00:52:13.346387 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-28 00:52:13.346392 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-28 00:52:13.346398 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-28 00:52:13.346403 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-28 00:52:13.346408 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-28 00:52:13.346413 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-28 00:52:13.346418 | orchestrator | 2026-02-28 00:52:13.346423 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-28 00:52:13.346429 | orchestrator | Saturday 28 February 2026 00:52:08 +0000 (0:00:15.024) 0:05:01.087 ***** 2026-02-28 00:52:13.346434 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:52:13.346439 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:52:13.346444 | orchestrator | 
skipping: [testbed-node-0] 2026-02-28 00:52:13.346449 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:52:13.346455 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:13.346460 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:13.346465 | orchestrator | 2026-02-28 00:52:13.346471 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-28 00:52:13.346476 | orchestrator | Saturday 28 February 2026 00:52:09 +0000 (0:00:00.900) 0:05:01.988 ***** 2026-02-28 00:52:13.346481 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:52:13.346487 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:52:13.346492 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:52:13.346497 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:13.346502 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:13.346508 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:13.346517 | orchestrator | 2026-02-28 00:52:13.346526 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:52:13.346535 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:52:13.346546 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-28 00:52:13.346554 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-28 00:52:13.346562 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-28 00:52:13.346571 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-28 00:52:13.346584 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-28 00:52:13.346594 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-28 00:52:13.346602 | orchestrator |
2026-02-28 00:52:13.346610 | orchestrator |
2026-02-28 00:52:13.346620 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:52:13.346630 | orchestrator | Saturday 28 February 2026 00:52:10 +0000 (0:00:00.486) 0:05:02.475 *****
2026-02-28 00:52:13.346641 | orchestrator | ===============================================================================
2026-02-28 00:52:13.346651 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.17s
2026-02-28 00:52:13.346660 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 44.17s
2026-02-28 00:52:13.346668 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.53s
2026-02-28 00:52:13.346677 | orchestrator | Manage labels ---------------------------------------------------------- 15.02s
2026-02-28 00:52:13.346686 | orchestrator | kubectl : Install required packages ------------------------------------ 14.22s
2026-02-28 00:52:13.346696 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.85s
2026-02-28 00:52:13.346704 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 9.04s
2026-02-28 00:52:13.346713 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.12s
2026-02-28 00:52:13.346721 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.85s
2026-02-28 00:52:13.346730 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.57s
2026-02-28 00:52:13.346737 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 4.46s
2026-02-28 00:52:13.346743 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 3.40s
2026-02-28 00:52:13.346748 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.39s
2026-02-28 00:52:13.346753 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 3.27s
2026-02-28 00:52:13.346758 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 2.85s
2026-02-28 00:52:13.346763 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.65s
2026-02-28 00:52:13.346768 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.53s
2026-02-28 00:52:13.346773 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.49s
2026-02-28 00:52:13.346777 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.47s
2026-02-28 00:52:13.346782 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.41s
2026-02-28 00:52:13.346787 | orchestrator | 2026-02-28 00:52:13 | INFO  | Task ba312156-fe42-4c28-ad1a-fa777ea78ebd is in state STARTED
2026-02-28 00:52:13.346872 | orchestrator | 2026-02-28 00:52:13 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:52:13.347140 | orchestrator | 2026-02-28 00:52:13 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:52:13.347157 | orchestrator | 2026-02-28 00:52:13 | INFO  | Task 62b32f35-7e9c-46e5-ad32-af6d8892e7f1 is in state STARTED
2026-02-28 00:52:13.348385 | orchestrator | 2026-02-28 00:52:13 | INFO  | Task 59ae1e76-f12c-4bb0-aa5a-0c157fb81d9c is in state STARTED
2026-02-28 00:52:13.348515 | orchestrator | 2026-02-28 00:52:13 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:52:16.403565 | orchestrator | 2026-02-28 00:52:16 | INFO  | Task
f8451b02-4056-44a0-8170-d4f200625151 is in state STARTED
2026-02-28 00:52:16.410742 | orchestrator | 2026-02-28 00:52:16 | INFO  | Task ba312156-fe42-4c28-ad1a-fa777ea78ebd is in state STARTED
2026-02-28 00:52:16.410854 | orchestrator | 2026-02-28 00:52:16 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:52:16.411824 | orchestrator | 2026-02-28 00:52:16 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:52:16.412523 | orchestrator | 2026-02-28 00:52:16 | INFO  | Task 62b32f35-7e9c-46e5-ad32-af6d8892e7f1 is in state STARTED
2026-02-28 00:52:16.413662 | orchestrator | 2026-02-28 00:52:16 | INFO  | Task 59ae1e76-f12c-4bb0-aa5a-0c157fb81d9c is in state STARTED
2026-02-28 00:52:16.413735 | orchestrator | 2026-02-28 00:52:16 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:52:22.528044 | orchestrator | 2026-02-28 00:52:22 | INFO  | Task f8451b02-4056-44a0-8170-d4f200625151 is in state SUCCESS
2026-02-28 00:52:25.581609 | orchestrator | 2026-02-28 00:52:25 | INFO  | Task 59ae1e76-f12c-4bb0-aa5a-0c157fb81d9c is in state SUCCESS
2026-02-28 00:53:50.973822 | orchestrator | 2026-02-28 00:53:50 | INFO  | Task
62b32f35-7e9c-46e5-ad32-af6d8892e7f1 is in state STARTED
2026-02-28 00:53:50.974055 | orchestrator | 2026-02-28 00:53:50 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:53:54.014381 | orchestrator | 2026-02-28 00:53:54 | INFO  | Task ba312156-fe42-4c28-ad1a-fa777ea78ebd is in state STARTED
2026-02-28 00:53:54.015754 | orchestrator | 2026-02-28 00:53:54 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:53:54.017789 | orchestrator | 2026-02-28 00:53:54 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:53:54.019954 | orchestrator | 2026-02-28 00:53:54 | INFO  | Task 62b32f35-7e9c-46e5-ad32-af6d8892e7f1 is in state SUCCESS
2026-02-28 00:53:54.021795 | orchestrator |
2026-02-28 00:53:54.021881 | orchestrator |
2026-02-28 00:53:54.021902 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-02-28 00:53:54.021922 | orchestrator |
2026-02-28 00:53:54.021941 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-28 00:53:54.021958 | orchestrator | Saturday 28 February 2026 00:52:16 +0000 (0:00:00.204) 0:00:00.204 *****
2026-02-28 00:53:54.021976 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-28 00:53:54.021993 | orchestrator |
2026-02-28 00:53:54.022011 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-28 00:53:54.022103 | orchestrator | Saturday 28 February 2026 00:52:17 +0000 (0:00:01.187) 0:00:01.391 *****
2026-02-28 00:53:54.022115 | orchestrator | changed: [testbed-manager]
2026-02-28 00:53:54.022125 | orchestrator |
2026-02-28 00:53:54.022135 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-02-28 00:53:54.022145 | orchestrator | Saturday 28 February 2026 00:52:18 +0000 (0:00:01.466) 0:00:02.858 *****
2026-02-28 00:53:54.022155 | orchestrator | changed: [testbed-manager]
2026-02-28 00:53:54.022165 | orchestrator |
2026-02-28 00:53:54.022174 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:53:54.022184 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:53:54.022195 | orchestrator |
2026-02-28 00:53:54.022205 | orchestrator |
2026-02-28 00:53:54.022215 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:53:54.022224 | orchestrator | Saturday 28 February 2026 00:52:19 +0000 (0:00:00.619) 0:00:03.478 *****
2026-02-28 00:53:54.022234 | orchestrator | ===============================================================================
2026-02-28 00:53:54.022243 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.47s
2026-02-28 00:53:54.022254 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.19s
2026-02-28 00:53:54.022263 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.62s
2026-02-28 00:53:54.022273 | orchestrator |
2026-02-28 00:53:54.022283 | orchestrator |
2026-02-28 00:53:54.022292 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-02-28 00:53:54.022302 | orchestrator |
2026-02-28 00:53:54.022311 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-02-28 00:53:54.022321 | orchestrator | Saturday 28 February 2026 00:52:15 +0000 (0:00:00.181) 0:00:00.181 *****
2026-02-28 00:53:54.022331 | orchestrator | ok: [testbed-manager]
2026-02-28 00:53:54.022341 | orchestrator |
2026-02-28 00:53:54.022350 | orchestrator | TASK [Create .kube directory] **************************************************
2026-02-28 00:53:54.022360 | orchestrator | Saturday 28 February 2026 00:52:16 +0000 (0:00:00.733) 0:00:00.914 *****
2026-02-28 00:53:54.022381 | orchestrator | ok: [testbed-manager]
2026-02-28 00:53:54.022393 | orchestrator |
2026-02-28 00:53:54.022461 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-28 00:53:54.022473 | orchestrator | Saturday 28 February 2026 00:52:17 +0000 (0:00:00.682) 0:00:01.597 *****
2026-02-28 00:53:54.022484 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-28 00:53:54.022496 | orchestrator |
2026-02-28 00:53:54.022507 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-28 00:53:54.022518 | orchestrator | Saturday 28 February 2026 00:52:18 +0000 (0:00:00.799) 0:00:02.397 *****
2026-02-28 00:53:54.022530 | orchestrator | changed: [testbed-manager]
2026-02-28 00:53:54.022541 | orchestrator |
2026-02-28 00:53:54.022552 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-02-28 00:53:54.022564 | orchestrator | Saturday 28 February 2026 00:52:20 +0000 (0:00:02.082) 0:00:04.479 *****
2026-02-28 00:53:54.022576 | orchestrator | changed: [testbed-manager]
2026-02-28 00:53:54.022586 | orchestrator |
2026-02-28 00:53:54.022597 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-02-28 00:53:54.022609 | orchestrator | Saturday 28 February 2026 00:52:20 +0000 (0:00:00.647) 0:00:05.127 *****
2026-02-28 00:53:54.022620 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-28 00:53:54.022631 | orchestrator |
2026-02-28 00:53:54.022640 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-02-28 00:53:54.022650 | orchestrator | Saturday 28 February 2026 00:52:22 +0000 (0:00:01.837) 0:00:06.965 *****
2026-02-28 00:53:54.022660 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-28 00:53:54.022669 | orchestrator |
2026-02-28 00:53:54.022679 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-02-28 00:53:54.022696 | orchestrator | Saturday 28 February 2026 00:52:24 +0000 (0:00:01.344) 0:00:08.309 *****
2026-02-28 00:53:54.022703 | orchestrator | ok: [testbed-manager]
2026-02-28 00:53:54.022711 | orchestrator |
2026-02-28 00:53:54.022719 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-28 00:53:54.022727 | orchestrator | Saturday 28 February 2026 00:52:24 +0000 (0:00:00.522) 0:00:08.832 *****
2026-02-28 00:53:54.022735 | orchestrator | ok: [testbed-manager]
2026-02-28 00:53:54.022743 | orchestrator |
2026-02-28 00:53:54.022751 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:53:54.022759 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:53:54.022767 | orchestrator |
2026-02-28 00:53:54.022775 | orchestrator |
2026-02-28 00:53:54.022783 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:53:54.022791 | orchestrator | Saturday 28 February 2026 00:52:24 +0000 (0:00:00.321) 0:00:09.154 *****
2026-02-28 00:53:54.022799 | orchestrator | ===============================================================================
2026-02-28 00:53:54.022807 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.08s
2026-02-28 00:53:54.022838 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.84s
2026-02-28 00:53:54.022854 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.34s
2026-02-28 00:53:54.022881 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.80s
2026-02-28 00:53:54.022896 | orchestrator | Get home directory of operator user ------------------------------------- 0.73s
2026-02-28 00:53:54.022909 | orchestrator | Create .kube directory -------------------------------------------------- 0.68s
2026-02-28 00:53:54.022923 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.65s
2026-02-28 00:53:54.022932 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.52s
2026-02-28 00:53:54.022940 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.32s
2026-02-28 00:53:54.022948 | orchestrator |
2026-02-28 00:53:54.022956 | orchestrator |
2026-02-28 00:53:54.022963 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-02-28 00:53:54.022971 | orchestrator |
2026-02-28 00:53:54.022979 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-02-28 00:53:54.022987 | orchestrator | Saturday 28 February 2026 00:50:26 +0000 (0:00:00.131) 0:00:00.131 *****
2026-02-28 00:53:54.022995 | orchestrator | ok: [localhost] => {
2026-02-28 00:53:54.023004 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-02-28 00:53:54.023012 | orchestrator | }
2026-02-28 00:53:54.023020 | orchestrator |
2026-02-28 00:53:54.023028 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-02-28 00:53:54.023036 | orchestrator | Saturday 28 February 2026 00:50:26 +0000 (0:00:00.059) 0:00:00.190 *****
2026-02-28 00:53:54.023045 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-02-28 00:53:54.023053 | orchestrator | ...ignoring
2026-02-28 00:53:54.023062 | orchestrator |
2026-02-28 00:53:54.023070 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-02-28 00:53:54.023078 | orchestrator | Saturday 28 February 2026 00:50:29 +0000 (0:00:03.149) 0:00:03.339 *****
2026-02-28 00:53:54.023086 | orchestrator | skipping: [localhost]
2026-02-28 00:53:54.023093 | orchestrator |
2026-02-28 00:53:54.023101 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-02-28 00:53:54.023109 | orchestrator | Saturday 28 February 2026 00:50:29 +0000 (0:00:00.244) 0:00:03.584 *****
2026-02-28 00:53:54.023117 | orchestrator | ok: [localhost]
2026-02-28 00:53:54.023125 | orchestrator |
2026-02-28 00:53:54.023133 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 00:53:54.023148 | orchestrator |
2026-02-28 00:53:54.023156 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 00:53:54.023164 | orchestrator | Saturday 28 February 2026 00:50:30 +0000 (0:00:00.461) 0:00:04.046 *****
2026-02-28 00:53:54.023177 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:53:54.023186 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:53:54.023194 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:53:54.023202 | orchestrator |
2026-02-28 00:53:54.023210 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 00:53:54.023218 | orchestrator | Saturday 28 February 2026 00:50:30 +0000 (0:00:00.495) 0:00:04.542 *****
2026-02-28 00:53:54.023226 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-02-28 00:53:54.023234 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-02-28 00:53:54.023242 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-02-28 00:53:54.023250 | orchestrator |
2026-02-28 00:53:54.023258 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-02-28 00:53:54.023266 | orchestrator |
2026-02-28 00:53:54.023274 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-28 00:53:54.023282 | orchestrator | Saturday 28 February 2026 00:50:32 +0000 (0:00:01.519) 0:00:06.061 *****
2026-02-28 00:53:54.023290 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:53:54.023298 | orchestrator |
2026-02-28 00:53:54.023306 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-02-28 00:53:54.023314 | orchestrator | Saturday 28 February 2026 00:50:33 +0000 (0:00:01.261) 0:00:06.787 *****
2026-02-28 00:53:54.023322 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:53:54.023329 | orchestrator |
2026-02-28 00:53:54.023337 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-02-28 00:53:54.023345 | orchestrator | Saturday 28 February 2026 00:50:34 +0000 (0:00:00.725) 0:00:08.048 *****
2026-02-28 00:53:54.023353 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:54.023361 | orchestrator |
2026-02-28 00:53:54.023369 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-02-28 00:53:54.023377 | orchestrator | Saturday 28 February 2026 00:50:34 +0000 (0:00:00.405) 0:00:08.454 *****
2026-02-28 00:53:54.023385 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:54.023393 | orchestrator |
2026-02-28 00:53:54.023432 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-02-28 00:53:54.023441 |
orchestrator | Saturday 28 February 2026 00:50:36 +0000 (0:00:01.490) 0:00:09.945 ***** 2026-02-28 00:53:54.023449 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:53:54.023457 | orchestrator | 2026-02-28 00:53:54.023465 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-28 00:53:54.023473 | orchestrator | Saturday 28 February 2026 00:50:37 +0000 (0:00:00.973) 0:00:10.918 ***** 2026-02-28 00:53:54.023481 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:53:54.023489 | orchestrator | 2026-02-28 00:53:54.023497 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-28 00:53:54.023505 | orchestrator | Saturday 28 February 2026 00:50:39 +0000 (0:00:01.723) 0:00:12.642 ***** 2026-02-28 00:53:54.023513 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:53:54.023521 | orchestrator | 2026-02-28 00:53:54.023529 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-28 00:53:54.023542 | orchestrator | Saturday 28 February 2026 00:50:40 +0000 (0:00:01.344) 0:00:13.986 ***** 2026-02-28 00:53:54.023550 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:53:54.023558 | orchestrator | 2026-02-28 00:53:54.023566 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-28 00:53:54.023574 | orchestrator | Saturday 28 February 2026 00:50:41 +0000 (0:00:00.914) 0:00:14.901 ***** 2026-02-28 00:53:54.023588 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:53:54.023596 | orchestrator | 2026-02-28 00:53:54.023604 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-28 00:53:54.023612 | orchestrator | Saturday 28 February 2026 00:50:41 +0000 (0:00:00.345) 0:00:15.247 ***** 2026-02-28 00:53:54.023620 | orchestrator | 
skipping: [testbed-node-0] 2026-02-28 00:53:54.023628 | orchestrator | 2026-02-28 00:53:54.023636 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-28 00:53:54.023644 | orchestrator | Saturday 28 February 2026 00:50:42 +0000 (0:00:00.502) 0:00:15.749 ***** 2026-02-28 00:53:54.023655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:53:54.023671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:53:54.023682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:53:54.023691 | orchestrator | 2026-02-28 00:53:54.023700 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-28 00:53:54.023708 | orchestrator | Saturday 28 February 2026 00:50:43 +0000 (0:00:01.258) 0:00:17.008 ***** 2026-02-28 00:53:54.023727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:53:54.023740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:53:54.023750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:53:54.023758 | orchestrator | 2026-02-28 00:53:54.023766 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-28 00:53:54.023775 | orchestrator | Saturday 28 February 2026 00:50:46 +0000 (0:00:03.077) 0:00:20.085 ***** 2026-02-28 00:53:54.023782 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-28 00:53:54.023790 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-28 00:53:54.023798 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-28 00:53:54.023806 | orchestrator | 2026-02-28 00:53:54.023814 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2026-02-28 00:53:54.023822 | orchestrator | Saturday 28 February 2026 00:50:49 +0000 (0:00:02.656) 0:00:22.742 ***** 2026-02-28 00:53:54.023834 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-28 00:53:54.023842 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-28 00:53:54.023850 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-28 00:53:54.023858 | orchestrator | 2026-02-28 00:53:54.023866 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-28 00:53:54.023878 | orchestrator | Saturday 28 February 2026 00:50:52 +0000 (0:00:03.022) 0:00:25.765 ***** 2026-02-28 00:53:54.023886 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-28 00:53:54.023894 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-28 00:53:54.023902 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-28 00:53:54.023910 | orchestrator | 2026-02-28 00:53:54.023918 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-28 00:53:54.023926 | orchestrator | Saturday 28 February 2026 00:50:53 +0000 (0:00:01.497) 0:00:27.263 ***** 2026-02-28 00:53:54.023934 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-28 00:53:54.023942 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-28 00:53:54.023950 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-28 00:53:54.023958 | orchestrator | 2026-02-28 00:53:54.023966 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2026-02-28 00:53:54.023974 | orchestrator | Saturday 28 February 2026 00:50:55 +0000 (0:00:02.324) 0:00:29.588 ***** 2026-02-28 00:53:54.023982 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-28 00:53:54.023990 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-28 00:53:54.023998 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-28 00:53:54.024005 | orchestrator | 2026-02-28 00:53:54.024014 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-28 00:53:54.024021 | orchestrator | Saturday 28 February 2026 00:50:57 +0000 (0:00:01.866) 0:00:31.454 ***** 2026-02-28 00:53:54.024030 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-28 00:53:54.024046 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-28 00:53:54.024060 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-28 00:53:54.024074 | orchestrator | 2026-02-28 00:53:54.024088 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-28 00:53:54.024102 | orchestrator | Saturday 28 February 2026 00:51:00 +0000 (0:00:02.238) 0:00:33.692 ***** 2026-02-28 00:53:54.024117 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:53:54.024126 | orchestrator | 2026-02-28 00:53:54.024134 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-02-28 00:53:54.024142 | orchestrator | Saturday 28 February 2026 00:51:01 +0000 (0:00:01.574) 0:00:35.267 ***** 2026-02-28 
00:53:54.024150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:53:54.024171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:53:54.024181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:53:54.024190 | orchestrator | 2026-02-28 00:53:54.024198 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-02-28 00:53:54.024206 | orchestrator | Saturday 28 February 2026 00:51:03 +0000 (0:00:01.646) 0:00:36.914 ***** 2026-02-28 00:53:54.024218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-28 00:53:54.024232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-28 00:53:54.024241 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:53:54.024249 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:53:54.024263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-28 00:53:54.024272 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:53:54.024280 | orchestrator | 2026-02-28 00:53:54.024288 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-02-28 00:53:54.024296 | orchestrator | Saturday 28 February 2026 00:51:03 +0000 (0:00:00.505) 0:00:37.419 ***** 2026-02-28 00:53:54.024305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-28 00:53:54.024313 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:53:54.024329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-28 00:53:54.024343 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:53:54.024352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-28 00:53:54.024361 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:53:54.024368 | orchestrator | 2026-02-28 00:53:54.024377 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-02-28 00:53:54.024388 | orchestrator | Saturday 28 February 2026 00:51:05 +0000 (0:00:01.442) 0:00:38.861 ***** 2026-02-28 00:53:54.024397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:53:54.024428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:53:54.024443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:53:54.024451 | orchestrator | 2026-02-28 00:53:54.024459 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-02-28 00:53:54.024467 | orchestrator | Saturday 28 February 2026 00:51:06 +0000 (0:00:01.430) 0:00:40.291 ***** 2026-02-28 00:53:54.024476 | orchestrator | changed: [testbed-node-0] => { 2026-02-28 00:53:54.024484 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:53:54.024492 | orchestrator | } 2026-02-28 00:53:54.024500 | orchestrator | changed: [testbed-node-1] => { 2026-02-28 00:53:54.024508 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:53:54.024516 | orchestrator | } 2026-02-28 00:53:54.024524 | orchestrator | changed: [testbed-node-2] => { 2026-02-28 00:53:54.024532 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:53:54.024540 | orchestrator | } 2026-02-28 00:53:54.024548 | orchestrator | 2026-02-28 00:53:54.024556 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-28 00:53:54.024564 | orchestrator | Saturday 28 February 2026 00:51:07 +0000 (0:00:00.734) 0:00:41.026 ***** 2026-02-28 00:53:54.024579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-28 00:53:54.024588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-28 00:53:54.024604 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:53:54.024612 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:53:54.024621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-28 00:53:54.024629 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:53:54.024637 | orchestrator | 2026-02-28 00:53:54.024645 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-02-28 00:53:54.024653 | orchestrator | Saturday 28 February 2026 00:51:08 +0000 (0:00:01.333) 0:00:42.359 ***** 2026-02-28 00:53:54.024661 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:53:54.024669 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:53:54.024677 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:53:54.024685 | orchestrator | 2026-02-28 00:53:54.024693 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-02-28 00:53:54.024701 | orchestrator | Saturday 28 February 2026 00:51:09 +0000 (0:00:01.240) 0:00:43.599 ***** 2026-02-28 00:53:54.024714 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:53:54.024728 | orchestrator | changed: [testbed-node-1] 
2026-02-28 00:53:54.024742 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:53:54.024756 | orchestrator | 2026-02-28 00:53:54.024770 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-28 00:53:54.024785 | orchestrator | Saturday 28 February 2026 00:51:19 +0000 (0:00:09.146) 0:00:52.745 ***** 2026-02-28 00:53:54.024800 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:53:54.024815 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:53:54.024829 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:53:54.024844 | orchestrator | 2026-02-28 00:53:54.024858 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-28 00:53:54.024874 | orchestrator | 2026-02-28 00:53:54.024889 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-28 00:53:54.024911 | orchestrator | Saturday 28 February 2026 00:51:19 +0000 (0:00:00.766) 0:00:53.512 ***** 2026-02-28 00:53:54.024926 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:53:54.024941 | orchestrator | 2026-02-28 00:53:54.024956 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-28 00:53:54.024971 | orchestrator | Saturday 28 February 2026 00:51:20 +0000 (0:00:00.798) 0:00:54.311 ***** 2026-02-28 00:53:54.024986 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:53:54.025000 | orchestrator | 2026-02-28 00:53:54.025015 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-28 00:53:54.025039 | orchestrator | Saturday 28 February 2026 00:51:20 +0000 (0:00:00.148) 0:00:54.459 ***** 2026-02-28 00:53:54.025054 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:53:54.025069 | orchestrator | 2026-02-28 00:53:54.025084 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-28 00:53:54.025099 | 
orchestrator | Saturday 28 February 2026 00:51:27 +0000 (0:00:06.920) 0:01:01.380 ***** 2026-02-28 00:53:54.025114 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:53:54.025129 | orchestrator | 2026-02-28 00:53:54.025143 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-28 00:53:54.025158 | orchestrator | 2026-02-28 00:53:54.025173 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-28 00:53:54.025188 | orchestrator | Saturday 28 February 2026 00:53:17 +0000 (0:01:50.144) 0:02:51.524 ***** 2026-02-28 00:53:54.025203 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:53:54.025218 | orchestrator | 2026-02-28 00:53:54.025233 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-28 00:53:54.025248 | orchestrator | Saturday 28 February 2026 00:53:18 +0000 (0:00:00.768) 0:02:52.293 ***** 2026-02-28 00:53:54.025263 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:53:54.025278 | orchestrator | 2026-02-28 00:53:54.025293 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-28 00:53:54.025308 | orchestrator | Saturday 28 February 2026 00:53:18 +0000 (0:00:00.219) 0:02:52.512 ***** 2026-02-28 00:53:54.025323 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:53:54.025338 | orchestrator | 2026-02-28 00:53:54.025353 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-28 00:53:54.025367 | orchestrator | Saturday 28 February 2026 00:53:21 +0000 (0:00:02.373) 0:02:54.886 ***** 2026-02-28 00:53:54.025382 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:53:54.025397 | orchestrator | 2026-02-28 00:53:54.025436 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-28 00:53:54.025451 | orchestrator | 2026-02-28 00:53:54.025466 | 
orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-28 00:53:54.025481 | orchestrator | Saturday 28 February 2026 00:53:33 +0000 (0:00:12.260) 0:03:07.147 ***** 2026-02-28 00:53:54.025496 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:53:54.025511 | orchestrator | 2026-02-28 00:53:54.025532 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-28 00:53:54.025547 | orchestrator | Saturday 28 February 2026 00:53:34 +0000 (0:00:00.942) 0:03:08.090 ***** 2026-02-28 00:53:54.025562 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:53:54.025577 | orchestrator | 2026-02-28 00:53:54.025592 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-28 00:53:54.025607 | orchestrator | Saturday 28 February 2026 00:53:34 +0000 (0:00:00.359) 0:03:08.450 ***** 2026-02-28 00:53:54.025621 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:53:54.025636 | orchestrator | 2026-02-28 00:53:54.025651 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-28 00:53:54.025666 | orchestrator | Saturday 28 February 2026 00:53:36 +0000 (0:00:02.034) 0:03:10.484 ***** 2026-02-28 00:53:54.025680 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:53:54.025695 | orchestrator | 2026-02-28 00:53:54.025710 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-28 00:53:54.025725 | orchestrator | 2026-02-28 00:53:54.025740 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-02-28 00:53:54.025755 | orchestrator | Saturday 28 February 2026 00:53:48 +0000 (0:00:11.968) 0:03:22.453 ***** 2026-02-28 00:53:54.025769 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:53:54.025784 | orchestrator | 2026-02-28 00:53:54.025798 | 
orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-28 00:53:54.025813 | orchestrator | Saturday 28 February 2026 00:53:49 +0000 (0:00:00.522) 0:03:22.975 ***** 2026-02-28 00:53:54.025828 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:53:54.025851 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:53:54.025866 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:53:54.025881 | orchestrator | 2026-02-28 00:53:54.025896 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:53:54.025911 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-02-28 00:53:54.025926 | orchestrator | testbed-node-0 : ok=26  changed=16  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-02-28 00:53:54.025941 | orchestrator | testbed-node-1 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-28 00:53:54.025956 | orchestrator | testbed-node-2 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-28 00:53:54.025971 | orchestrator | 2026-02-28 00:53:54.025986 | orchestrator | 2026-02-28 00:53:54.026001 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:53:54.026102 | orchestrator | Saturday 28 February 2026 00:53:51 +0000 (0:00:02.358) 0:03:25.333 ***** 2026-02-28 00:53:54.026123 | orchestrator | =============================================================================== 2026-02-28 00:53:54.026139 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------ 134.37s 2026-02-28 00:53:54.026162 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 11.33s 2026-02-28 00:53:54.026177 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 9.15s 2026-02-28 00:53:54.026192 | orchestrator | Check RabbitMQ service 
-------------------------------------------------- 3.15s 2026-02-28 00:53:54.026208 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.08s 2026-02-28 00:53:54.026222 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.02s 2026-02-28 00:53:54.026237 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.66s 2026-02-28 00:53:54.026252 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.51s 2026-02-28 00:53:54.026267 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.36s 2026-02-28 00:53:54.026282 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.32s 2026-02-28 00:53:54.026297 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.24s 2026-02-28 00:53:54.026312 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.87s 2026-02-28 00:53:54.026327 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.72s 2026-02-28 00:53:54.026342 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.65s 2026-02-28 00:53:54.026357 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.58s 2026-02-28 00:53:54.026371 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.52s 2026-02-28 00:53:54.026422 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.50s 2026-02-28 00:53:54.026437 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 1.49s 2026-02-28 00:53:54.026489 | orchestrator | service-cert-copy : rabbitmq | Copying over backend internal TLS key ---- 1.44s 2026-02-28 00:53:54.026508 | orchestrator | service-check-containers : rabbitmq | 
Check containers ------------------ 1.43s 2026-02-28 00:53:54.026528 | orchestrator | 2026-02-28 00:53:54 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:53:57.055601 | orchestrator | 2026-02-28 00:53:57 | INFO  | Task ba312156-fe42-4c28-ad1a-fa777ea78ebd is in state STARTED 2026-02-28 00:53:57.056097 | orchestrator | 2026-02-28 00:53:57 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:53:57.056893 | orchestrator | 2026-02-28 00:53:57 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:53:57.057163 | orchestrator | 2026-02-28 00:53:57 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:55:10.056740 | orchestrator | 2026-02-28 00:55:10 | INFO  | Task ba312156-fe42-4c28-ad1a-fa777ea78ebd is in state STARTED 2026-02-28 00:55:10.059543 | orchestrator | 2026-02-28 00:55:10 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:55:10.060089 | orchestrator | 2026-02-28 00:55:10 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:55:10.060114 | orchestrator | 2026-02-28 00:55:10 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:55:13.093951 | orchestrator | 2026-02-28 00:55:13 | INFO  | Task ba312156-fe42-4c28-ad1a-fa777ea78ebd is in state SUCCESS 2026-02-28 00:55:13.095756 | orchestrator | 2026-02-28 00:55:13.095819 | orchestrator | 2026-02-28 00:55:13.095833 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 00:55:13.095846 | orchestrator | 2026-02-28 00:55:13.095857 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 00:55:13.095869 | orchestrator | Saturday 28 February 2026 00:51:29 +0000 (0:00:00.219) 0:00:00.219 ***** 2026-02-28 00:55:13.095880 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:55:13.095893 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:55:13.095904 | orchestrator | ok: [testbed-node-5] 
2026-02-28 00:55:13.095915 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:55:13.095926 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:55:13.095937 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:55:13.095991 | orchestrator | 2026-02-28 00:55:13.096005 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 00:55:13.096017 | orchestrator | Saturday 28 February 2026 00:51:30 +0000 (0:00:00.841) 0:00:01.061 ***** 2026-02-28 00:55:13.096028 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-02-28 00:55:13.096041 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-02-28 00:55:13.096052 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-28 00:55:13.096068 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-28 00:55:13.096087 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-28 00:55:13.096106 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-28 00:55:13.096124 | orchestrator | 2026-02-28 00:55:13.096325 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-28 00:55:13.096348 | orchestrator | 2026-02-28 00:55:13.096378 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-02-28 00:55:13.096391 | orchestrator | Saturday 28 February 2026 00:51:31 +0000 (0:00:01.063) 0:00:02.124 ***** 2026-02-28 00:55:13.096407 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:55:13.096421 | orchestrator | 2026-02-28 00:55:13.096434 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-02-28 00:55:13.096447 | orchestrator | Saturday 28 February 2026 00:51:32 +0000 (0:00:01.388) 0:00:03.513 ***** 2026-02-28 00:55:13.096486 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.096526 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.096541 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.096554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.096568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.096581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.096594 | orchestrator | 2026-02-28 00:55:13.096623 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-28 00:55:13.096636 | orchestrator | Saturday 28 February 2026 00:51:34 +0000 (0:00:01.689) 0:00:05.202 ***** 2026-02-28 00:55:13.096649 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.096663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.096677 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.096698 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.096709 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.096720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.096731 | orchestrator | 2026-02-28 00:55:13.096742 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-28 00:55:13.096753 | orchestrator | 
Saturday 28 February 2026 00:51:37 +0000 (0:00:02.985) 0:00:08.188 ***** 2026-02-28 00:55:13.096912 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.096939 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.097011 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.097026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.097037 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.097065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.097076 | orchestrator | 2026-02-28 00:55:13.097088 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-28 00:55:13.097100 | orchestrator | Saturday 28 February 2026 00:51:39 +0000 (0:00:01.922) 0:00:10.110 ***** 2026-02-28 00:55:13.097154 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.097169 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.097180 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.097192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.097203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.097214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.097225 | orchestrator | 2026-02-28 00:55:13.097243 | orchestrator | TASK 
[service-check-containers : ovn_controller | Check containers] ************ 2026-02-28 00:55:13.097255 | orchestrator | Saturday 28 February 2026 00:51:41 +0000 (0:00:02.553) 0:00:12.663 ***** 2026-02-28 00:55:13.097266 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.097285 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.097303 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.097314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.097325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.097337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.097348 | orchestrator | 2026-02-28 00:55:13.097359 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-02-28 00:55:13.097371 | orchestrator | Saturday 28 February 2026 00:51:44 +0000 (0:00:02.364) 0:00:15.028 ***** 2026-02-28 00:55:13.097382 | orchestrator | changed: [testbed-node-3] => { 2026-02-28 00:55:13.097393 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:55:13.097405 | orchestrator | } 2026-02-28 00:55:13.097416 | orchestrator | changed: [testbed-node-4] => { 2026-02-28 00:55:13.097427 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:55:13.097438 | orchestrator | } 2026-02-28 00:55:13.097449 | orchestrator | changed: [testbed-node-5] => { 2026-02-28 00:55:13.097460 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:55:13.097470 | orchestrator | } 2026-02-28 00:55:13.097481 | orchestrator | changed: [testbed-node-0] => { 2026-02-28 00:55:13.097512 | orchestrator |  "msg": "Notifying handlers" 
2026-02-28 00:55:13.097524 | orchestrator | } 2026-02-28 00:55:13.097535 | orchestrator | changed: [testbed-node-1] => { 2026-02-28 00:55:13.097546 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:55:13.097556 | orchestrator | } 2026-02-28 00:55:13.097567 | orchestrator | changed: [testbed-node-2] => { 2026-02-28 00:55:13.097578 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:55:13.097589 | orchestrator | } 2026-02-28 00:55:13.097599 | orchestrator | 2026-02-28 00:55:13.097611 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-28 00:55:13.097621 | orchestrator | Saturday 28 February 2026 00:51:44 +0000 (0:00:00.818) 0:00:15.847 ***** 2026-02-28 00:55:13.097640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:55:13.097659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:55:13.097671 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:55:13.097682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:55:13.097694 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:55:13.097710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:55:13.097721 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:55:13.097732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:55:13.097744 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:55:13.097754 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:55:13.097765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:55:13.097776 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:55:13.097787 | orchestrator | 2026-02-28 
00:55:13.097798 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-28 00:55:13.097809 | orchestrator | Saturday 28 February 2026 00:51:46 +0000 (0:00:01.864) 0:00:17.711 ***** 2026-02-28 00:55:13.097820 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:55:13.097831 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:55:13.097841 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:55:13.097852 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:55:13.097863 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:55:13.097873 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:55:13.097884 | orchestrator | 2026-02-28 00:55:13.097895 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-28 00:55:13.097906 | orchestrator | Saturday 28 February 2026 00:51:49 +0000 (0:00:02.957) 0:00:20.669 ***** 2026-02-28 00:55:13.097916 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-28 00:55:13.097937 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-28 00:55:13.097948 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-28 00:55:13.097959 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-02-28 00:55:13.097970 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-28 00:55:13.097981 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-28 00:55:13.097992 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-28 00:55:13.098004 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-28 00:55:13.098015 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-28 00:55:13.098101 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-28 00:55:13.098113 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-28 00:55:13.098130 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-28 00:55:13.098141 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-28 00:55:13.098153 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-28 00:55:13.098165 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-28 00:55:13.098188 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-28 00:55:13.098199 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-28 00:55:13.098211 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-28 00:55:13.098222 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-28 00:55:13.098233 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-28 00:55:13.098250 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-28 00:55:13.098262 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-28 00:55:13.098282 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-28 00:55:13.098301 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-28 00:55:13.098319 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-28 00:55:13.098338 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-28 00:55:13.098356 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-28 00:55:13.098376 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-28 00:55:13.098393 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-28 00:55:13.098412 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-28 00:55:13.098444 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-28 00:55:13.098462 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-28 00:55:13.098482 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-28 00:55:13.098571 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-28 00:55:13.098590 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-28 
00:55:13.098607 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-28 00:55:13.098619 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-28 00:55:13.098630 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-28 00:55:13.098642 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-28 00:55:13.098672 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-28 00:55:13.098684 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-28 00:55:13.098695 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-28 00:55:13.098706 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-28 00:55:13.098718 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-28 00:55:13.098729 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-28 00:55:13.098739 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-28 00:55:13.098750 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-28 00:55:13.098782 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 
'present'}) 2026-02-28 00:55:13.098794 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-28 00:55:13.098805 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-28 00:55:13.098816 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-28 00:55:13.098827 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-28 00:55:13.098838 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-28 00:55:13.098849 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-28 00:55:13.098860 | orchestrator | 2026-02-28 00:55:13.098872 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-28 00:55:13.098883 | orchestrator | Saturday 28 February 2026 00:52:13 +0000 (0:00:23.445) 0:00:44.115 ***** 2026-02-28 00:55:13.098894 | orchestrator | 2026-02-28 00:55:13.098912 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-28 00:55:13.098923 | orchestrator | Saturday 28 February 2026 00:52:13 +0000 (0:00:00.107) 0:00:44.222 ***** 2026-02-28 00:55:13.098943 | orchestrator | 2026-02-28 00:55:13.098954 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-28 00:55:13.098978 | orchestrator | Saturday 28 February 2026 00:52:13 +0000 (0:00:00.092) 0:00:44.315 ***** 2026-02-28 00:55:13.098990 | orchestrator | 2026-02-28 00:55:13.099001 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-02-28 00:55:13.099011 | orchestrator | Saturday 28 February 2026 00:52:13 +0000 (0:00:00.098) 0:00:44.413 ***** 2026-02-28 00:55:13.099022 | orchestrator | 2026-02-28 00:55:13.099033 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-28 00:55:13.099044 | orchestrator | Saturday 28 February 2026 00:52:13 +0000 (0:00:00.078) 0:00:44.492 ***** 2026-02-28 00:55:13.099055 | orchestrator | 2026-02-28 00:55:13.099065 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-28 00:55:13.099076 | orchestrator | Saturday 28 February 2026 00:52:13 +0000 (0:00:00.082) 0:00:44.574 ***** 2026-02-28 00:55:13.099087 | orchestrator | 2026-02-28 00:55:13.099097 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-02-28 00:55:13.099107 | orchestrator | Saturday 28 February 2026 00:52:13 +0000 (0:00:00.086) 0:00:44.660 ***** 2026-02-28 00:55:13.099117 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:55:13.099127 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:55:13.099136 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:55:13.099146 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:55:13.099156 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:55:13.099181 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:55:13.099191 | orchestrator | 2026-02-28 00:55:13.099201 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-28 00:55:13.099211 | orchestrator | Saturday 28 February 2026 00:52:16 +0000 (0:00:02.516) 0:00:47.176 ***** 2026-02-28 00:55:13.099220 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:55:13.099230 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:55:13.099240 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:55:13.099249 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:55:13.099259 
| orchestrator | changed: [testbed-node-3] 2026-02-28 00:55:13.099268 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:55:13.099278 | orchestrator | 2026-02-28 00:55:13.099288 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-28 00:55:13.099297 | orchestrator | 2026-02-28 00:55:13.099307 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-28 00:55:13.099317 | orchestrator | Saturday 28 February 2026 00:52:25 +0000 (0:00:09.167) 0:00:56.344 ***** 2026-02-28 00:55:13.099326 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:55:13.099336 | orchestrator | 2026-02-28 00:55:13.099346 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-28 00:55:13.099356 | orchestrator | Saturday 28 February 2026 00:52:26 +0000 (0:00:00.633) 0:00:56.977 ***** 2026-02-28 00:55:13.099366 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:55:13.099375 | orchestrator | 2026-02-28 00:55:13.099385 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-28 00:55:13.099395 | orchestrator | Saturday 28 February 2026 00:52:26 +0000 (0:00:00.813) 0:00:57.790 ***** 2026-02-28 00:55:13.099404 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:55:13.099414 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:55:13.099424 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:55:13.099433 | orchestrator | 2026-02-28 00:55:13.099444 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-28 00:55:13.099462 | orchestrator | Saturday 28 February 2026 00:52:27 +0000 (0:00:00.899) 0:00:58.690 ***** 2026-02-28 00:55:13.099478 | orchestrator | ok: [testbed-node-0] 2026-02-28 
00:55:13.099553 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:55:13.099572 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:55:13.099588 | orchestrator |
2026-02-28 00:55:13.099617 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-02-28 00:55:13.099633 | orchestrator | Saturday 28 February 2026 00:52:28 +0000 (0:00:00.356) 0:00:59.046 *****
2026-02-28 00:55:13.099651 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:55:13.099668 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:55:13.099686 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:55:13.099702 | orchestrator |
2026-02-28 00:55:13.099719 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-02-28 00:55:13.099745 | orchestrator | Saturday 28 February 2026 00:52:28 +0000 (0:00:00.600) 0:00:59.647 *****
2026-02-28 00:55:13.099763 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:55:13.099781 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:55:13.099797 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:55:13.099814 | orchestrator |
2026-02-28 00:55:13.099831 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-02-28 00:55:13.099843 | orchestrator | Saturday 28 February 2026 00:52:29 +0000 (0:00:00.436) 0:01:00.083 *****
2026-02-28 00:55:13.099853 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:55:13.099863 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:55:13.099872 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:55:13.099882 | orchestrator |
2026-02-28 00:55:13.099891 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-02-28 00:55:13.099901 | orchestrator | Saturday 28 February 2026 00:52:29 +0000 (0:00:00.357) 0:01:00.440 *****
2026-02-28 00:55:13.099911 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:55:13.099920 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:55:13.099930 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:55:13.099939 | orchestrator |
2026-02-28 00:55:13.099949 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-02-28 00:55:13.099959 | orchestrator | Saturday 28 February 2026 00:52:29 +0000 (0:00:00.335) 0:01:00.776 *****
2026-02-28 00:55:13.099967 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:55:13.099975 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:55:13.099983 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:55:13.099991 | orchestrator |
2026-02-28 00:55:13.099999 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-02-28 00:55:13.100014 | orchestrator | Saturday 28 February 2026 00:52:30 +0000 (0:00:00.598) 0:01:01.375 *****
2026-02-28 00:55:13.100022 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:55:13.100031 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:55:13.100038 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:55:13.100046 | orchestrator |
2026-02-28 00:55:13.100054 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-02-28 00:55:13.100062 | orchestrator | Saturday 28 February 2026 00:52:30 +0000 (0:00:00.382) 0:01:01.758 *****
2026-02-28 00:55:13.100070 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:55:13.100078 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:55:13.100086 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:55:13.100093 | orchestrator |
2026-02-28 00:55:13.100107 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-02-28 00:55:13.100120 | orchestrator | Saturday 28 February 2026 00:52:31 +0000 (0:00:00.454) 0:01:02.212 *****
2026-02-28 00:55:13.100133 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:55:13.100147 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:55:13.100160 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:55:13.100174 | orchestrator |
2026-02-28 00:55:13.100183 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-02-28 00:55:13.100191 | orchestrator | Saturday 28 February 2026 00:52:31 +0000 (0:00:00.482) 0:01:02.695 *****
2026-02-28 00:55:13.100199 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:55:13.100207 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:55:13.100215 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:55:13.100223 | orchestrator |
2026-02-28 00:55:13.100230 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-02-28 00:55:13.100249 | orchestrator | Saturday 28 February 2026 00:52:32 +0000 (0:00:00.549) 0:01:03.244 *****
2026-02-28 00:55:13.100258 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:55:13.100265 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:55:13.100273 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:55:13.100281 | orchestrator |
2026-02-28 00:55:13.100289 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-02-28 00:55:13.100297 | orchestrator | Saturday 28 February 2026 00:52:32 +0000 (0:00:00.381) 0:01:03.625 *****
2026-02-28 00:55:13.100305 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:55:13.100313 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:55:13.100320 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:55:13.100328 | orchestrator |
2026-02-28 00:55:13.100336 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-02-28 00:55:13.100344 | orchestrator | Saturday 28 February 2026 00:52:33 +0000 (0:00:00.335) 0:01:03.960 *****
2026-02-28 00:55:13.100352 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:55:13.100360 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:55:13.100368 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:55:13.100376 | orchestrator |
2026-02-28 00:55:13.100384 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-02-28 00:55:13.100392 | orchestrator | Saturday 28 February 2026 00:52:33 +0000 (0:00:00.372) 0:01:04.333 *****
2026-02-28 00:55:13.100400 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:55:13.100407 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:55:13.100415 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:55:13.100423 | orchestrator |
2026-02-28 00:55:13.100431 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-02-28 00:55:13.100439 | orchestrator | Saturday 28 February 2026 00:52:33 +0000 (0:00:00.355) 0:01:04.688 *****
2026-02-28 00:55:13.100447 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:55:13.100455 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:55:13.100462 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:55:13.100470 | orchestrator |
2026-02-28 00:55:13.100478 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-02-28 00:55:13.100486 | orchestrator | Saturday 28 February 2026 00:52:34 +0000 (0:00:00.626) 0:01:05.314 *****
2026-02-28 00:55:13.100516 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:55:13.100524 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:55:13.100532 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:55:13.100540 | orchestrator |
2026-02-28 00:55:13.100549 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-28 00:55:13.100557 | orchestrator | Saturday 28 February 2026 00:52:34 +0000 (0:00:00.452) 0:01:05.767 *****
2026-02-28 00:55:13.100565 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:55:13.100573 | orchestrator |
2026-02-28 00:55:13.100587 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-02-28 00:55:13.100595 | orchestrator | Saturday 28 February 2026 00:52:35 +0000 (0:00:01.006) 0:01:06.773 *****
2026-02-28 00:55:13.100603 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:55:13.100611 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:55:13.100619 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:55:13.100627 | orchestrator |
2026-02-28 00:55:13.100635 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-02-28 00:55:13.100643 | orchestrator | Saturday 28 February 2026 00:52:36 +0000 (0:00:00.687) 0:01:07.461 *****
2026-02-28 00:55:13.100651 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:55:13.100659 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:55:13.100667 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:55:13.100675 | orchestrator |
2026-02-28 00:55:13.100683 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-02-28 00:55:13.100691 | orchestrator | Saturday 28 February 2026 00:52:36 +0000 (0:00:00.478) 0:01:07.939 *****
2026-02-28 00:55:13.100708 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:55:13.100721 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:55:13.100735 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:55:13.100763 | orchestrator |
2026-02-28 00:55:13.100786 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-02-28 00:55:13.100798 | orchestrator | Saturday 28 February 2026 00:52:37 +0000 (0:00:00.391) 0:01:08.331 *****
2026-02-28 00:55:13.100811 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:55:13.100825 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:55:13.100838 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:55:13.100852 | orchestrator |
2026-02-28 00:55:13.100873 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-02-28 00:55:13.100886 | orchestrator | Saturday 28 February 2026 00:52:37 +0000 (0:00:00.522) 0:01:08.853 *****
2026-02-28 00:55:13.100894 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:55:13.100902 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:55:13.100910 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:55:13.100918 | orchestrator |
2026-02-28 00:55:13.100926 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-02-28 00:55:13.100934 | orchestrator | Saturday 28 February 2026 00:52:38 +0000 (0:00:00.800) 0:01:09.654 *****
2026-02-28 00:55:13.100942 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:55:13.100950 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:55:13.100958 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:55:13.100966 | orchestrator |
2026-02-28 00:55:13.100974 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-02-28 00:55:13.100982 | orchestrator | Saturday 28 February 2026 00:52:39 +0000 (0:00:00.401) 0:01:10.055 *****
2026-02-28 00:55:13.100990 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:55:13.100998 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:55:13.101006 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:55:13.101014 | orchestrator |
2026-02-28 00:55:13.101021 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-02-28 00:55:13.101030 | orchestrator | Saturday 28 February 2026 00:52:39 +0000 (0:00:00.372) 0:01:10.428 *****
2026-02-28 00:55:13.101038 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:55:13.101046 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:55:13.101053 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:55:13.101061 | orchestrator |
2026-02-28 00:55:13.101069 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-28 00:55:13.101077 | orchestrator | Saturday 28 February 2026 00:52:39 +0000 (0:00:00.390) 0:01:10.819 *****
2026-02-28 00:55:13.101087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101228 | orchestrator |
2026-02-28 00:55:13.101242 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-28 00:55:13.101256 | orchestrator | Saturday 28 February 2026 00:52:43 +0000 (0:00:03.565) 0:01:14.385 *****
2026-02-28 00:55:13.101269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101424 | orchestrator |
2026-02-28 00:55:13.101432 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] *************************
2026-02-28 00:55:13.101440 | orchestrator | Saturday 28 February 2026 00:52:48 +0000 (0:00:04.976) 0:01:19.361 *****
2026-02-28 00:55:13.101449 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1)
2026-02-28 00:55:13.101457 | orchestrator |
2026-02-28 00:55:13.101465 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] *****
2026-02-28 00:55:13.101479 | orchestrator | Saturday 28 February 2026 00:52:48 +0000 (0:00:00.522) 0:01:19.884 *****
2026-02-28 00:55:13.101487 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:55:13.101516 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:55:13.101524 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:55:13.101532 | orchestrator |
2026-02-28 00:55:13.101540 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] **********
2026-02-28 00:55:13.101548 | orchestrator | Saturday 28 February 2026 00:52:49 +0000 (0:00:00.624) 0:01:20.508 *****
2026-02-28 00:55:13.101556 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:55:13.101564 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:55:13.101572 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:55:13.101580 | orchestrator |
2026-02-28 00:55:13.101587 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] *******************
2026-02-28 00:55:13.101595 | orchestrator | Saturday 28 February 2026 00:52:51 +0000 (0:00:02.208) 0:01:22.717 *****
2026-02-28 00:55:13.101603 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:55:13.101611 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:55:13.101619 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:55:13.101627 | orchestrator |
2026-02-28 00:55:13.101635 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ********************
2026-02-28 00:55:13.101644 | orchestrator | Saturday 28 February 2026 00:52:53 +0000 (0:00:02.167) 0:01:24.884 *****
2026-02-28 00:55:13.101658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101778 | orchestrator |
2026-02-28 00:55:13.101786 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-02-28 00:55:13.101794 | orchestrator | Saturday 28 February 2026 00:52:58 +0000 (0:00:04.338) 0:01:29.223 *****
2026-02-28 00:55:13.101803 | orchestrator | changed: [testbed-node-0] => {
2026-02-28 00:55:13.101816 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 00:55:13.101825 | orchestrator | }
2026-02-28 00:55:13.101833 | orchestrator | changed: [testbed-node-1] => {
2026-02-28 00:55:13.101841 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 00:55:13.101849 | orchestrator | }
2026-02-28 00:55:13.101859 | orchestrator | changed: [testbed-node-2] => {
2026-02-28 00:55:13.101873 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 00:55:13.101886 | orchestrator | }
2026-02-28 00:55:13.101899 | orchestrator |
2026-02-28 00:55:13.101913 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-28 00:55:13.101925 | orchestrator | Saturday 28 February 2026 00:52:58 +0000 (0:00:00.408) 0:01:29.632 *****
2026-02-28 00:55:13.101939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.101993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.102008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.102063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.102093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.102107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.102122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.102137 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-2, testbed-node-1, testbed-node-0 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:55:13.102152 | orchestrator |
2026-02-28 00:55:13.102165 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] *****
2026-02-28 00:55:13.102179 | orchestrator | Saturday 28 February 2026 00:53:01 +0000 (0:00:02.437) 0:01:32.069 *****
2026-02-28 00:55:13.102190 | orchestrator | changed: [testbed-node-0] => (item=[1])
2026-02-28 00:55:13.102198 | orchestrator | changed: [testbed-node-2] => (item=[1])
2026-02-28 00:55:13.102206 | orchestrator | changed: [testbed-node-1] => (item=[1])
2026-02-28 00:55:13.102214 | orchestrator |
2026-02-28 00:55:13.102222 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-02-28 00:55:13.102230 | orchestrator | Saturday 28 February 2026 00:53:02 +0000 (0:00:00.930)
0:01:33.000 ***** 2026-02-28 00:55:13.102238 | orchestrator | changed: [testbed-node-0] => { 2026-02-28 00:55:13.102246 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:55:13.102254 | orchestrator | } 2026-02-28 00:55:13.102262 | orchestrator | changed: [testbed-node-1] => { 2026-02-28 00:55:13.102270 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:55:13.102278 | orchestrator | } 2026-02-28 00:55:13.102285 | orchestrator | changed: [testbed-node-2] => { 2026-02-28 00:55:13.102293 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:55:13.102307 | orchestrator | } 2026-02-28 00:55:13.102316 | orchestrator | 2026-02-28 00:55:13.102324 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-28 00:55:13.102332 | orchestrator | Saturday 28 February 2026 00:53:02 +0000 (0:00:00.867) 0:01:33.867 ***** 2026-02-28 00:55:13.102340 | orchestrator | 2026-02-28 00:55:13.102348 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-28 00:55:13.102356 | orchestrator | Saturday 28 February 2026 00:53:02 +0000 (0:00:00.070) 0:01:33.938 ***** 2026-02-28 00:55:13.102364 | orchestrator | 2026-02-28 00:55:13.102372 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-28 00:55:13.102386 | orchestrator | Saturday 28 February 2026 00:53:03 +0000 (0:00:00.074) 0:01:34.012 ***** 2026-02-28 00:55:13.102394 | orchestrator | 2026-02-28 00:55:13.102402 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-28 00:55:13.102410 | orchestrator | Saturday 28 February 2026 00:53:03 +0000 (0:00:00.082) 0:01:34.094 ***** 2026-02-28 00:55:13.102418 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:55:13.102426 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:55:13.102434 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:55:13.102442 | orchestrator | 
2026-02-28 00:55:13.102450 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-28 00:55:13.102458 | orchestrator | Saturday 28 February 2026 00:53:14 +0000 (0:00:11.273) 0:01:45.368 ***** 2026-02-28 00:55:13.102466 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:55:13.102474 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:55:13.102481 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:55:13.102538 | orchestrator | 2026-02-28 00:55:13.102555 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-02-28 00:55:13.102564 | orchestrator | Saturday 28 February 2026 00:53:30 +0000 (0:00:15.625) 0:02:00.994 ***** 2026-02-28 00:55:13.102572 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-02-28 00:55:13.102580 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-02-28 00:55:13.102588 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-02-28 00:55:13.102596 | orchestrator | 2026-02-28 00:55:13.102603 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-28 00:55:13.102611 | orchestrator | Saturday 28 February 2026 00:53:41 +0000 (0:00:11.018) 0:02:12.013 ***** 2026-02-28 00:55:13.102619 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:55:13.102627 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:55:13.102635 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:55:13.102643 | orchestrator | 2026-02-28 00:55:13.102651 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-28 00:55:13.102659 | orchestrator | Saturday 28 February 2026 00:53:54 +0000 (0:00:13.616) 0:02:25.629 ***** 2026-02-28 00:55:13.102667 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:55:13.102674 | orchestrator | 2026-02-28 00:55:13.102683 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] 
****************************** 2026-02-28 00:55:13.102691 | orchestrator | Saturday 28 February 2026 00:53:54 +0000 (0:00:00.123) 0:02:25.752 ***** 2026-02-28 00:55:13.102698 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:55:13.102707 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:55:13.102714 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:55:13.102722 | orchestrator | 2026-02-28 00:55:13.102730 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-28 00:55:13.102738 | orchestrator | Saturday 28 February 2026 00:53:55 +0000 (0:00:00.770) 0:02:26.523 ***** 2026-02-28 00:55:13.102746 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:55:13.102754 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:55:13.102762 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:55:13.102770 | orchestrator | 2026-02-28 00:55:13.102778 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-28 00:55:13.102786 | orchestrator | Saturday 28 February 2026 00:53:56 +0000 (0:00:00.619) 0:02:27.142 ***** 2026-02-28 00:55:13.102794 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:55:13.102802 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:55:13.102810 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:55:13.102817 | orchestrator | 2026-02-28 00:55:13.102825 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-28 00:55:13.102833 | orchestrator | Saturday 28 February 2026 00:53:57 +0000 (0:00:00.879) 0:02:28.022 ***** 2026-02-28 00:55:13.102841 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:55:13.102849 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:55:13.102857 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:55:13.102871 | orchestrator | 2026-02-28 00:55:13.102879 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-28 
00:55:13.102887 | orchestrator | Saturday 28 February 2026 00:53:57 +0000 (0:00:00.550) 0:02:28.572 ***** 2026-02-28 00:55:13.102895 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:55:13.102903 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:55:13.102911 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:55:13.102919 | orchestrator | 2026-02-28 00:55:13.102927 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-28 00:55:13.102935 | orchestrator | Saturday 28 February 2026 00:53:58 +0000 (0:00:00.944) 0:02:29.517 ***** 2026-02-28 00:55:13.102943 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:55:13.102952 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:55:13.102960 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:55:13.102967 | orchestrator | 2026-02-28 00:55:13.102975 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-02-28 00:55:13.102983 | orchestrator | Saturday 28 February 2026 00:53:59 +0000 (0:00:00.833) 0:02:30.350 ***** 2026-02-28 00:55:13.102991 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-02-28 00:55:13.102999 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-02-28 00:55:13.103007 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-02-28 00:55:13.103015 | orchestrator | 2026-02-28 00:55:13.103023 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-02-28 00:55:13.103031 | orchestrator | Saturday 28 February 2026 00:54:01 +0000 (0:00:01.619) 0:02:31.970 ***** 2026-02-28 00:55:13.103039 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:55:13.103047 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:55:13.103055 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:55:13.103062 | orchestrator | 2026-02-28 00:55:13.103071 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-28 00:55:13.103085 | orchestrator | 
Saturday 28 February 2026 00:54:01 +0000 (0:00:00.387) 0:02:32.358 ***** 2026-02-28 00:55:13.103093 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103104 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103112 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103119 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': 
{'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103132 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103139 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:55:13.103157 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103164 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:55:13.103183 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:55:13.103202 | orchestrator | 2026-02-28 00:55:13.103209 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-28 00:55:13.103216 | orchestrator | Saturday 28 February 2026 00:54:04 +0000 (0:00:03.482) 0:02:35.840 ***** 2026-02-28 00:55:13.103223 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103230 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 
00:55:13.103243 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103261 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 
'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103298 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:55:13.103325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': 
'1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:55:13.103350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:55:13.103382 | orchestrator | 2026-02-28 00:55:13.103395 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-02-28 00:55:13.103403 | orchestrator | Saturday 28 February 2026 00:54:11 +0000 (0:00:06.740) 0:02:42.581 ***** 2026-02-28 00:55:13.103410 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-02-28 00:55:13.103417 | orchestrator | 2026-02-28 00:55:13.103424 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-02-28 00:55:13.103430 | orchestrator | Saturday 28 February 2026 00:54:12 
+0000 (0:00:00.786) 0:02:43.368 ***** 2026-02-28 00:55:13.103437 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:55:13.103444 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:55:13.103450 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:55:13.103457 | orchestrator | 2026-02-28 00:55:13.103464 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-02-28 00:55:13.103471 | orchestrator | Saturday 28 February 2026 00:54:13 +0000 (0:00:00.799) 0:02:44.167 ***** 2026-02-28 00:55:13.103478 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:55:13.103485 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:55:13.103508 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:55:13.103515 | orchestrator | 2026-02-28 00:55:13.103528 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-02-28 00:55:13.103535 | orchestrator | Saturday 28 February 2026 00:54:14 +0000 (0:00:01.766) 0:02:45.933 ***** 2026-02-28 00:55:13.103541 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:55:13.103598 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:55:13.103614 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:55:13.103621 | orchestrator | 2026-02-28 00:55:13.103632 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-02-28 00:55:13.103639 | orchestrator | Saturday 28 February 2026 00:54:16 +0000 (0:00:01.952) 0:02:47.885 ***** 2026-02-28 00:55:13.103647 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103655 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103662 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103669 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:55:13.103722 | orchestrator | ok: [testbed-node-0] => (item={'key': 
'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:55:13.103737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.103744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 
00:55:13.103751 | orchestrator | 2026-02-28 00:55:13.103758 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-02-28 00:55:13.103765 | orchestrator | Saturday 28 February 2026 00:54:22 +0000 (0:00:05.075) 0:02:52.961 ***** 2026-02-28 00:55:13.103772 | orchestrator | ok: [testbed-node-0] => { 2026-02-28 00:55:13.103779 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:55:13.103786 | orchestrator | } 2026-02-28 00:55:13.103793 | orchestrator | changed: [testbed-node-1] => { 2026-02-28 00:55:13.103800 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:55:13.103806 | orchestrator | } 2026-02-28 00:55:13.103813 | orchestrator | changed: [testbed-node-2] => { 2026-02-28 00:55:13.103820 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:55:13.103827 | orchestrator | } 2026-02-28 00:55:13.103834 | orchestrator | 2026-02-28 00:55:13.103840 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-28 00:55:13.103847 | orchestrator | Saturday 28 February 2026 00:54:22 +0000 (0:00:00.449) 0:02:53.411 ***** 2026-02-28 00:55:13.103859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:55:13.103871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:55:13.103882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:55:13.103889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:55:13.103896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-02-28 00:55:13.103903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:55:13.103910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:55:13.103917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:55:13.104071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:55:13.104084 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-2, testbed-node-1, testbed-node-0 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:55:13.104091 | orchestrator | 2026-02-28 00:55:13.104098 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-02-28 00:55:13.104105 | orchestrator | Saturday 28 February 2026 00:54:25 +0000 (0:00:03.213) 0:02:56.624 ***** 2026-02-28 00:55:13.104116 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-02-28 00:55:13.104123 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-02-28 00:55:13.104130 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-02-28 00:55:13.104136 | orchestrator | 2026-02-28 00:55:13.104143 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-02-28 00:55:13.104150 | orchestrator | Saturday 28 February 2026 00:54:27 +0000 (0:00:01.505) 0:02:58.130 ***** 2026-02-28 00:55:13.104156 | orchestrator | changed: [testbed-node-0] => { 2026-02-28 00:55:13.104163 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:55:13.104170 | orchestrator | } 2026-02-28 00:55:13.104177 | orchestrator | changed: [testbed-node-1] => { 2026-02-28 00:55:13.104183 | orchestrator |  "msg": "Notifying 
handlers" 2026-02-28 00:55:13.104190 | orchestrator | } 2026-02-28 00:55:13.104197 | orchestrator | changed: [testbed-node-2] => { 2026-02-28 00:55:13.104203 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:55:13.104210 | orchestrator | } 2026-02-28 00:55:13.104217 | orchestrator | 2026-02-28 00:55:13.104223 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-28 00:55:13.104230 | orchestrator | Saturday 28 February 2026 00:54:27 +0000 (0:00:00.599) 0:02:58.729 ***** 2026-02-28 00:55:13.104237 | orchestrator | 2026-02-28 00:55:13.104243 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-28 00:55:13.104250 | orchestrator | Saturday 28 February 2026 00:54:27 +0000 (0:00:00.070) 0:02:58.800 ***** 2026-02-28 00:55:13.104257 | orchestrator | 2026-02-28 00:55:13.104264 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-28 00:55:13.104270 | orchestrator | Saturday 28 February 2026 00:54:27 +0000 (0:00:00.070) 0:02:58.870 ***** 2026-02-28 00:55:13.104277 | orchestrator | 2026-02-28 00:55:13.104284 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-28 00:55:13.104291 | orchestrator | Saturday 28 February 2026 00:54:27 +0000 (0:00:00.068) 0:02:58.939 ***** 2026-02-28 00:55:13.104297 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:55:13.104304 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:55:13.104311 | orchestrator | 2026-02-28 00:55:13.104317 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-28 00:55:13.104324 | orchestrator | Saturday 28 February 2026 00:54:40 +0000 (0:00:12.682) 0:03:11.622 ***** 2026-02-28 00:55:13.104331 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:55:13.104344 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:55:13.104350 | 
orchestrator | 2026-02-28 00:55:13.104357 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-02-28 00:55:13.104364 | orchestrator | Saturday 28 February 2026 00:54:52 +0000 (0:00:12.289) 0:03:23.911 ***** 2026-02-28 00:55:13.104371 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-02-28 00:55:13.104382 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-02-28 00:55:13.104394 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-02-28 00:55:13.104404 | orchestrator | 2026-02-28 00:55:13.104416 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-28 00:55:13.104426 | orchestrator | Saturday 28 February 2026 00:55:05 +0000 (0:00:12.472) 0:03:36.383 ***** 2026-02-28 00:55:13.104438 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:55:13.104447 | orchestrator | 2026-02-28 00:55:13.104458 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-28 00:55:13.104469 | orchestrator | Saturday 28 February 2026 00:55:05 +0000 (0:00:00.126) 0:03:36.510 ***** 2026-02-28 00:55:13.104479 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:55:13.104506 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:55:13.104517 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:55:13.104527 | orchestrator | 2026-02-28 00:55:13.104538 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-28 00:55:13.104550 | orchestrator | Saturday 28 February 2026 00:55:06 +0000 (0:00:00.755) 0:03:37.265 ***** 2026-02-28 00:55:13.104561 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:55:13.104570 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:55:13.104580 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:55:13.104591 | orchestrator | 2026-02-28 00:55:13.104601 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] 
****************************** 2026-02-28 00:55:13.104612 | orchestrator | Saturday 28 February 2026 00:55:06 +0000 (0:00:00.608) 0:03:37.874 ***** 2026-02-28 00:55:13.104623 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:55:13.104634 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:55:13.104646 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:55:13.104658 | orchestrator | 2026-02-28 00:55:13.104669 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-28 00:55:13.104691 | orchestrator | Saturday 28 February 2026 00:55:08 +0000 (0:00:01.135) 0:03:39.009 ***** 2026-02-28 00:55:13.104703 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:55:13.104715 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:55:13.104726 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:55:13.104736 | orchestrator | 2026-02-28 00:55:13.104745 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-28 00:55:13.104753 | orchestrator | Saturday 28 February 2026 00:55:08 +0000 (0:00:00.742) 0:03:39.752 ***** 2026-02-28 00:55:13.104760 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:55:13.104768 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:55:13.104775 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:55:13.104783 | orchestrator | 2026-02-28 00:55:13.104790 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-28 00:55:13.104798 | orchestrator | Saturday 28 February 2026 00:55:09 +0000 (0:00:00.809) 0:03:40.561 ***** 2026-02-28 00:55:13.104806 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:55:13.104813 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:55:13.104821 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:55:13.104828 | orchestrator | 2026-02-28 00:55:13.104836 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-02-28 00:55:13.104843 
| orchestrator | Saturday 28 February 2026 00:55:10 +0000 (0:00:01.185) 0:03:41.747 ***** 2026-02-28 00:55:13.104851 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-02-28 00:55:13.104859 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-02-28 00:55:13.104866 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-02-28 00:55:13.104874 | orchestrator | 2026-02-28 00:55:13.104882 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:55:13.104906 | orchestrator | testbed-node-0 : ok=65  changed=29  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-28 00:55:13.104915 | orchestrator | testbed-node-1 : ok=63  changed=30  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-02-28 00:55:13.104923 | orchestrator | testbed-node-2 : ok=63  changed=30  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-02-28 00:55:13.104931 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:55:13.104938 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:55:13.104946 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:55:13.104954 | orchestrator | 2026-02-28 00:55:13.104962 | orchestrator | 2026-02-28 00:55:13.104970 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:55:13.104977 | orchestrator | Saturday 28 February 2026 00:55:12 +0000 (0:00:01.349) 0:03:43.096 ***** 2026-02-28 00:55:13.104985 | orchestrator | =============================================================================== 2026-02-28 00:55:13.104992 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 27.91s 2026-02-28 00:55:13.105000 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 23.96s 
2026-02-28 00:55:13.105008 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 23.49s 2026-02-28 00:55:13.105016 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 23.45s 2026-02-28 00:55:13.105024 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.62s 2026-02-28 00:55:13.105031 | orchestrator | ovn-controller : Restart ovn-controller container ----------------------- 9.17s 2026-02-28 00:55:13.105039 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.74s 2026-02-28 00:55:13.105046 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.08s 2026-02-28 00:55:13.105053 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.98s 2026-02-28 00:55:13.105060 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 4.34s 2026-02-28 00:55:13.105066 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.57s 2026-02-28 00:55:13.105073 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.48s 2026-02-28 00:55:13.105080 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.21s 2026-02-28 00:55:13.105087 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.99s 2026-02-28 00:55:13.105093 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.96s 2026-02-28 00:55:13.105100 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.55s 2026-02-28 00:55:13.105107 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.52s 2026-02-28 00:55:13.105113 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.44s 2026-02-28 
00:55:13.105120 | orchestrator | service-check-containers : ovn_controller | Check containers ------------ 2.36s 2026-02-28 00:55:13.105127 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 2.21s 2026-02-28 00:55:13.105134 | orchestrator | 2026-02-28 00:55:13 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:55:13.105141 | orchestrator | 2026-02-28 00:55:13 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:55:13.105152 | orchestrator | 2026-02-28 00:55:13 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:55:16.133664 | orchestrator | 2026-02-28 00:55:16 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:55:16.134265 | orchestrator | 2026-02-28 00:55:16 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:55:16.134291 | orchestrator | 2026-02-28 00:55:16 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:55:19.182953 | orchestrator | 2026-02-28 00:55:19 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:55:19.183722 | orchestrator | 2026-02-28 00:55:19 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:55:19.183765 | orchestrator | 2026-02-28 00:55:19 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:55:22.223465 | orchestrator | 2026-02-28 00:55:22 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:55:22.225347 | orchestrator | 2026-02-28 00:55:22 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:55:22.225412 | orchestrator | 2026-02-28 00:55:22 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:55:25.270743 | orchestrator | 2026-02-28 00:55:25 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:55:25.270949 | orchestrator | 2026-02-28 00:55:25 | INFO  | Task 
9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:55:25.271483 | orchestrator | 2026-02-28 00:55:25 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:55:28.326494 | orchestrator | 2026-02-28 00:55:28 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:55:28.330604 | orchestrator | 2026-02-28 00:55:28 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:55:28.330686 | orchestrator | 2026-02-28 00:55:28 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:55:31.389168 | orchestrator | 2026-02-28 00:55:31 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:55:31.390131 | orchestrator | 2026-02-28 00:55:31 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:55:31.390162 | orchestrator | 2026-02-28 00:55:31 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:55:34.437132 | orchestrator | 2026-02-28 00:55:34 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:55:34.438433 | orchestrator | 2026-02-28 00:55:34 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:55:34.438474 | orchestrator | 2026-02-28 00:55:34 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:55:37.491240 | orchestrator | 2026-02-28 00:55:37 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:55:37.494090 | orchestrator | 2026-02-28 00:55:37 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:55:37.494150 | orchestrator | 2026-02-28 00:55:37 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:55:40.538834 | orchestrator | 2026-02-28 00:55:40 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:55:40.539913 | orchestrator | 2026-02-28 00:55:40 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 
00:55:40.539947 | orchestrator | 2026-02-28 00:55:40 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:55:43.625349 | orchestrator | 2026-02-28 00:55:43 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:55:43.625649 | orchestrator | 2026-02-28 00:55:43 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:55:43.625782 | orchestrator | 2026-02-28 00:55:43 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:55:46.677131 | orchestrator | 2026-02-28 00:55:46 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:55:46.679692 | orchestrator | 2026-02-28 00:55:46 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:55:46.679750 | orchestrator | 2026-02-28 00:55:46 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:55:49.736972 | orchestrator | 2026-02-28 00:55:49 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:55:49.738223 | orchestrator | 2026-02-28 00:55:49 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:55:49.738271 | orchestrator | 2026-02-28 00:55:49 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:55:52.774155 | orchestrator | 2026-02-28 00:55:52 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:55:52.775116 | orchestrator | 2026-02-28 00:55:52 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:55:52.775412 | orchestrator | 2026-02-28 00:55:52 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:55:55.821211 | orchestrator | 2026-02-28 00:55:55 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:55:55.823008 | orchestrator | 2026-02-28 00:55:55 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:55:55.823067 | orchestrator | 2026-02-28 00:55:55 | INFO  | Wait 1 second(s) 
until the next check 2026-02-28 00:55:58.871285 | orchestrator | 2026-02-28 00:55:58 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:55:58.872488 | orchestrator | 2026-02-28 00:55:58 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:55:58.872953 | orchestrator | 2026-02-28 00:55:58 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:01.923223 | orchestrator | 2026-02-28 00:56:01 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:56:01.925326 | orchestrator | 2026-02-28 00:56:01 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:56:01.925420 | orchestrator | 2026-02-28 00:56:01 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:04.977042 | orchestrator | 2026-02-28 00:56:04 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:56:04.977161 | orchestrator | 2026-02-28 00:56:04 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:56:04.977178 | orchestrator | 2026-02-28 00:56:04 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:08.007370 | orchestrator | 2026-02-28 00:56:08 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:56:08.008198 | orchestrator | 2026-02-28 00:56:08 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:56:08.010650 | orchestrator | 2026-02-28 00:56:08 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:11.053194 | orchestrator | 2026-02-28 00:56:11 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:56:11.054500 | orchestrator | 2026-02-28 00:56:11 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:56:11.054703 | orchestrator | 2026-02-28 00:56:11 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:14.093834 | orchestrator | 2026-02-28 
00:56:14 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:56:14.094432 | orchestrator | 2026-02-28 00:56:14 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:56:14.094469 | orchestrator | 2026-02-28 00:56:14 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:17.142981 | orchestrator | 2026-02-28 00:56:17 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:56:17.144267 | orchestrator | 2026-02-28 00:56:17 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:56:17.146467 | orchestrator | 2026-02-28 00:56:17 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:20.199961 | orchestrator | 2026-02-28 00:56:20 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:56:20.201442 | orchestrator | 2026-02-28 00:56:20 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:56:20.201461 | orchestrator | 2026-02-28 00:56:20 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:23.237371 | orchestrator | 2026-02-28 00:56:23 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:56:23.237471 | orchestrator | 2026-02-28 00:56:23 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:56:23.237485 | orchestrator | 2026-02-28 00:56:23 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:26.286343 | orchestrator | 2026-02-28 00:56:26 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:56:26.286434 | orchestrator | 2026-02-28 00:56:26 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:56:26.286450 | orchestrator | 2026-02-28 00:56:26 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:29.320044 | orchestrator | 2026-02-28 00:56:29 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state 
STARTED 2026-02-28 00:56:29.323203 | orchestrator | 2026-02-28 00:56:29 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:56:29.323304 | orchestrator | 2026-02-28 00:56:29 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:32.369042 | orchestrator | 2026-02-28 00:56:32 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:56:32.372048 | orchestrator | 2026-02-28 00:56:32 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:56:32.372108 | orchestrator | 2026-02-28 00:56:32 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:35.411622 | orchestrator | 2026-02-28 00:56:35 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:56:35.412915 | orchestrator | 2026-02-28 00:56:35 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:56:35.413624 | orchestrator | 2026-02-28 00:56:35 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:38.458673 | orchestrator | 2026-02-28 00:56:38 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:56:38.461106 | orchestrator | 2026-02-28 00:56:38 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:56:38.461147 | orchestrator | 2026-02-28 00:56:38 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:41.506164 | orchestrator | 2026-02-28 00:56:41 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:56:41.508789 | orchestrator | 2026-02-28 00:56:41 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:56:41.508854 | orchestrator | 2026-02-28 00:56:41 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:44.548458 | orchestrator | 2026-02-28 00:56:44 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:56:44.549153 | orchestrator | 2026-02-28 00:56:44 | INFO  
| Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:56:44.549459 | orchestrator | 2026-02-28 00:56:44 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:47.588478 | orchestrator | 2026-02-28 00:56:47 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:56:47.588624 | orchestrator | 2026-02-28 00:56:47 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:56:47.588641 | orchestrator | 2026-02-28 00:56:47 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:50.646053 | orchestrator | 2026-02-28 00:56:50 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:56:50.647074 | orchestrator | 2026-02-28 00:56:50 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:56:50.647124 | orchestrator | 2026-02-28 00:56:50 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:53.690610 | orchestrator | 2026-02-28 00:56:53 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:56:53.691442 | orchestrator | 2026-02-28 00:56:53 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:56:53.691473 | orchestrator | 2026-02-28 00:56:53 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:56.738194 | orchestrator | 2026-02-28 00:56:56 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:56:56.738311 | orchestrator | 2026-02-28 00:56:56 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 00:56:56.738973 | orchestrator | 2026-02-28 00:56:56 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:59.774839 | orchestrator | 2026-02-28 00:56:59 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:56:59.775030 | orchestrator | 2026-02-28 00:56:59 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED 2026-02-28 
00:56:59.775055 | orchestrator | 2026-02-28 00:56:59 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:57:02.815907 | orchestrator | 2026-02-28 00:57:02 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:57:02.819404 | orchestrator | 2026-02-28 00:57:02 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:57:02.820077 | orchestrator | 2026-02-28 00:57:02 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:57:05.854223 | orchestrator | 2026-02-28 00:57:05 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:57:05.855953 | orchestrator | 2026-02-28 00:57:05 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:57:05.856497 | orchestrator | 2026-02-28 00:57:05 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:57:08.903496 | orchestrator | 2026-02-28 00:57:08 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:57:08.905101 | orchestrator | 2026-02-28 00:57:08 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:57:08.905178 | orchestrator | 2026-02-28 00:57:08 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:57:11.957660 | orchestrator | 2026-02-28 00:57:11 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:57:11.959995 | orchestrator | 2026-02-28 00:57:11 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:57:11.960092 | orchestrator | 2026-02-28 00:57:11 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:57:15.015917 | orchestrator | 2026-02-28 00:57:15 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:57:15.016081 | orchestrator | 2026-02-28 00:57:15 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:57:15.016112 | orchestrator | 2026-02-28 00:57:15 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:57:18.061505 | orchestrator | 2026-02-28 00:57:18 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:57:18.063448 | orchestrator | 2026-02-28 00:57:18 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:57:18.063488 | orchestrator | 2026-02-28 00:57:18 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:57:21.101957 | orchestrator | 2026-02-28 00:57:21 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:57:21.103842 | orchestrator | 2026-02-28 00:57:21 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:57:21.104149 | orchestrator | 2026-02-28 00:57:21 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:57:24.145518 | orchestrator | 2026-02-28 00:57:24 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:57:24.146624 | orchestrator | 2026-02-28 00:57:24 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:57:24.146683 | orchestrator | 2026-02-28 00:57:24 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:57:27.192561 | orchestrator | 2026-02-28 00:57:27 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:57:27.194323 | orchestrator | 2026-02-28 00:57:27 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:57:27.194357 | orchestrator | 2026-02-28 00:57:27 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:57:30.234993 | orchestrator | 2026-02-28 00:57:30 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:57:30.238010 | orchestrator | 2026-02-28 00:57:30 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state STARTED
2026-02-28 00:57:30.238153 | orchestrator | 2026-02-28 00:57:30 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:57:33.285002 | orchestrator | 2026-02-28
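The polling records above share a fixed layout: a console timestamp, the node name, an inner application timestamp, a log level, and a message. A minimal sketch of pulling the task UUID and state out of such a record (the regexes and function name are mine, not from any OSISM tooling):

```python
import re

# One record as emitted above: console timestamp, node, inner timestamp,
# level, message. Field names here are illustrative only.
RECORD = re.compile(
    r"^(?P<console_ts>\S+ \S+) \| (?P<node>\S+) \| "
    r"(?P<app_ts>\S+ \S+) \| (?P<level>\w+)\s+\| (?P<msg>.*)$"
)
TASK_STATE = re.compile(r"Task (?P<uuid>[0-9a-f-]{36}) is in state (?P<state>\w+)")

def parse_task_state(line: str):
    """Return (uuid, state) for a task-state record, else None."""
    rec = RECORD.match(line)
    if not rec:
        return None
    m = TASK_STATE.search(rec.group("msg"))
    return (m.group("uuid"), m.group("state")) if m else None

line = ("2026-02-28 00:57:33.294289 | orchestrator | 2026-02-28 00:57:33 "
        "| INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state SUCCESS")
print(parse_task_state(line))
```

Grouping such records by UUID is enough to reconstruct each task's STARTED-to-SUCCESS timeline from the raw console stream.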
00:57:33 | INFO  | Task e56ad6c2-cc17-4c48-8b43-dbaf6972bef1 is in state STARTED
2026-02-28 00:57:33.287849 | orchestrator | 2026-02-28 00:57:33 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED
2026-02-28 00:57:33.294289 | orchestrator | 2026-02-28 00:57:33 | INFO  | Task 9360a8a3-309e-413e-b1d7-52e10f7ee700 is in state SUCCESS
2026-02-28 00:57:33.294508 | orchestrator |
2026-02-28 00:57:33.298009 | orchestrator |
2026-02-28 00:57:33.298164 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 00:57:33.298189 | orchestrator |
2026-02-28 00:57:33.298209 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 00:57:33.298229 | orchestrator | Saturday 28 February 2026 00:49:59 +0000 (0:00:00.708) 0:00:00.708 *****
2026-02-28 00:57:33.298245 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:57:33.298258 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:57:33.298269 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:57:33.298280 | orchestrator |
2026-02-28 00:57:33.298291 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 00:57:33.298303 | orchestrator | Saturday 28 February 2026 00:50:00 +0000 (0:00:00.531) 0:00:01.239 *****
2026-02-28 00:57:33.298314 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-02-28 00:57:33.298503 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-02-28 00:57:33.298519 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-02-28 00:57:33.298530 | orchestrator |
2026-02-28 00:57:33.298541 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-02-28 00:57:33.298553 | orchestrator |
2026-02-28 00:57:33.298564 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-28 00:57:33.298574 |
orchestrator | Saturday 28 February 2026 00:50:00 +0000 (0:00:00.731) 0:00:01.971 *****
2026-02-28 00:57:33.298615 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:57:33.298627 | orchestrator |
2026-02-28 00:57:33.298638 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-02-28 00:57:33.298649 | orchestrator | Saturday 28 February 2026 00:50:02 +0000 (0:00:01.131) 0:00:03.103 *****
2026-02-28 00:57:33.298660 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:57:33.298671 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:57:33.298682 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:57:33.298699 | orchestrator |
2026-02-28 00:57:33.298717 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-28 00:57:33.298733 | orchestrator | Saturday 28 February 2026 00:50:03 +0000 (0:00:01.118) 0:00:04.221 *****
2026-02-28 00:57:33.298751 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:57:33.298770 | orchestrator |
2026-02-28 00:57:33.298789 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-02-28 00:57:33.298804 | orchestrator | Saturday 28 February 2026 00:50:04 +0000 (0:00:01.109) 0:00:05.330 *****
2026-02-28 00:57:33.298815 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:57:33.298826 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:57:33.298836 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:57:33.298847 | orchestrator |
2026-02-28 00:57:33.298868 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-02-28 00:57:33.298879 | orchestrator | Saturday 28 February 2026 00:50:05 +0000 (0:00:00.905) 0:00:06.236 *****
2026-02-28 00:57:33.298890 | orchestrator | changed: [testbed-node-1] => (item={'name':
'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-28 00:57:33.298901 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-28 00:57:33.298912 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-28 00:57:33.298922 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-28 00:57:33.298933 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-28 00:57:33.298944 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-28 00:57:33.298956 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-28 00:57:33.298967 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-28 00:57:33.298978 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-28 00:57:33.298989 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-28 00:57:33.299000 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-28 00:57:33.299011 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-28 00:57:33.299021 | orchestrator |
2026-02-28 00:57:33.299032 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-28 00:57:33.299043 | orchestrator | Saturday 28 February 2026 00:50:09 +0000 (0:00:04.272) 0:00:10.508 *****
2026-02-28 00:57:33.299054 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-28 00:57:33.299075 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-28 00:57:33.299086 | orchestrator | changed:
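The sysctl task above applies a list of name/value items per node, and the ok (rather than changed) status on net.ipv4.tcp_retries2 with value 'KOLLA_UNSET' suggests such entries are left unmanaged. A sketch, under that assumption, of rendering the same item list to sysctl.conf-style lines (the helper name is mine):

```python
# Sysctl items exactly as they appear in the task output above.
SYSCTL_ITEMS = [
    {"name": "net.ipv6.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.tcp_retries2", "value": "KOLLA_UNSET"},
    {"name": "net.unix.max_dgram_qlen", "value": 128},
]

def render_sysctl_conf(items):
    """Render items to sysctl.conf lines, skipping 'KOLLA_UNSET' entries
    (assumption: that sentinel means 'do not manage this key')."""
    return "\n".join(
        f"{item['name']}={item['value']}"
        for item in items
        if item["value"] != "KOLLA_UNSET"
    )

print(render_sysctl_conf(SYSCTL_ITEMS))
```

The ip_nonlocal_bind keys matter for this play: they let haproxy and keepalived bind the virtual IP on nodes that do not currently hold it.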
[testbed-node-1] => (item=ip_vs)
2026-02-28 00:57:33.299097 | orchestrator |
2026-02-28 00:57:33.299107 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-28 00:57:33.299118 | orchestrator | Saturday 28 February 2026 00:50:10 +0000 (0:00:01.304) 0:00:11.812 *****
2026-02-28 00:57:33.299129 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-28 00:57:33.299144 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-28 00:57:33.299161 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-28 00:57:33.299179 | orchestrator |
2026-02-28 00:57:33.299196 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-28 00:57:33.299214 | orchestrator | Saturday 28 February 2026 00:50:13 +0000 (0:00:02.376) 0:00:14.189 *****
2026-02-28 00:57:33.299233 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-02-28 00:57:33.299245 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:57:33.299276 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-02-28 00:57:33.299565 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:57:33.299578 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-02-28 00:57:33.299642 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:57:33.299661 | orchestrator |
2026-02-28 00:57:33.299681 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-02-28 00:57:33.299695 | orchestrator | Saturday 28 February 2026 00:50:14 +0000 (0:00:01.130) 0:00:15.319 *****
2026-02-28 00:57:33.299710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-28 00:57:33.299728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-28 00:57:33.299749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-28 00:57:33.299762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:57:33.299787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:57:33.299810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:57:33.299823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:57:33.299836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:57:33.299847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:57:33.299859 | orchestrator | 2026-02-28 00:57:33.299870 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-28 00:57:33.299881 | orchestrator | Saturday 28 February 2026 00:50:17 +0000 (0:00:02.934) 0:00:18.254 ***** 2026-02-28 00:57:33.299892 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.299903 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.299914 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.299925 | orchestrator | 2026-02-28 00:57:33.299936 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-28 00:57:33.299952 | orchestrator | 
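Each TASK header above is followed by an ansible profile_tasks timing line: a wall-clock timestamp, the previous task's duration in parentheses, and the cumulative playbook runtime. A sketch of extracting both durations from that line shape (the regex and helpers are mine):

```python
import re

# Matches "(0:00:02.934) 0:00:18.254" style duration pairs in the
# profile_tasks timing lines shown above.
TIMING = re.compile(
    r"\((?P<prev>\d+:\d{2}:\d{2}\.\d+)\)\s+(?P<total>\d+:\d{2}:\d{2}\.\d+)"
)

def to_seconds(hms: str) -> float:
    """Convert an H:MM:SS.fff duration string to seconds."""
    h, m, s = hms.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

def parse_timing(line: str):
    """Return (previous_task_seconds, cumulative_seconds) or None."""
    m = TIMING.search(line)
    if not m:
        return None
    return to_seconds(m.group("prev")), to_seconds(m.group("total"))

line = "Saturday 28 February 2026 00:50:17 +0000 (0:00:02.934) 0:00:18.254 *****"
print(parse_timing(line))  # -> (2.934, 18.254)
```

Summing the per-task values across a play is a quick way to spot which tasks (here, the config-template ones) dominate the deploy time.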
Saturday 28 February 2026 00:50:19 +0000 (0:00:01.993) 0:00:20.248 *****
2026-02-28 00:57:33.299964 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-02-28 00:57:33.299974 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-02-28 00:57:33.299985 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-02-28 00:57:33.299996 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-02-28 00:57:33.300014 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-02-28 00:57:33.300025 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-02-28 00:57:33.300035 | orchestrator |
2026-02-28 00:57:33.300096 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-02-28 00:57:33.300108 | orchestrator | Saturday 28 February 2026 00:50:22 +0000 (0:00:02.987) 0:00:23.235 *****
2026-02-28 00:57:33.300118 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:57:33.300129 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:57:33.300140 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:57:33.300151 | orchestrator |
2026-02-28 00:57:33.300161 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-02-28 00:57:33.300172 | orchestrator | Saturday 28 February 2026 00:50:23 +0000 (0:00:01.659) 0:00:24.894 *****
2026-02-28 00:57:33.300183 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:57:33.300194 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:57:33.300205 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:57:33.300215 | orchestrator |
2026-02-28 00:57:33.300226 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-02-28 00:57:33.300237 | orchestrator | Saturday 28 February 2026 00:50:26 +0000 (0:00:03.084) 0:00:27.979 *****
2026-02-28 00:57:33.300249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group':
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-28 00:57:33.300270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:57:33.300283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:57:33.300296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__565a81b71615174b131a38c326e03f7dec774609', '__omit_place_holder__565a81b71615174b131a38c326e03f7dec774609'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-28 00:57:33.300308 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.300403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-28 00:57:33.300427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:57:33.300438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:57:33.300450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__565a81b71615174b131a38c326e03f7dec774609', '__omit_place_holder__565a81b71615174b131a38c326e03f7dec774609'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-28 00:57:33.300461 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.300482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-28 00:57:33.300494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:57:33.300506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:57:33.300530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__565a81b71615174b131a38c326e03f7dec774609', '__omit_place_holder__565a81b71615174b131a38c326e03f7dec774609'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-28 00:57:33.300541 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.300552 | orchestrator | 2026-02-28 00:57:33.300563 | orchestrator | 
TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-02-28 00:57:33.300574 | orchestrator | Saturday 28 February 2026 00:50:28 +0000 (0:00:01.185) 0:00:29.165 ***** 2026-02-28 00:57:33.300617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-28 00:57:33.300638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-28 00:57:33.300670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:57:33.300689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:57:33.300705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-28 00:57:33.300730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__565a81b71615174b131a38c326e03f7dec774609', '__omit_place_holder__565a81b71615174b131a38c326e03f7dec774609'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-28 00:57:33.300742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:57:33.300753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:57:33.300765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__565a81b71615174b131a38c326e03f7dec774609', '__omit_place_holder__565a81b71615174b131a38c326e03f7dec774609'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-28 00:57:33.300783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:57:33.300795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:57:33.300813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__565a81b71615174b131a38c326e03f7dec774609', '__omit_place_holder__565a81b71615174b131a38c326e03f7dec774609'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 2985'], 'timeout': '30'}}})  2026-02-28 00:57:33.300824 | orchestrator | 2026-02-28 00:57:33.300835 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-28 00:57:33.300846 | orchestrator | Saturday 28 February 2026 00:50:33 +0000 (0:00:05.000) 0:00:34.165 ***** 2026-02-28 00:57:33.300868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-28 00:57:33.300880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-28 00:57:33.300892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-28 00:57:33.300911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:57:33.300923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:57:33.300940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:57:33.300956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:57:33.300968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:57:33.300979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:57:33.300991 | orchestrator | 2026-02-28 00:57:33.301002 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-28 00:57:33.301013 | orchestrator | Saturday 28 February 2026 00:50:37 +0000 (0:00:04.580) 0:00:38.746 ***** 2026-02-28 00:57:33.301024 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-28 00:57:33.301036 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-28 00:57:33.301047 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-28 00:57:33.301057 | orchestrator | 2026-02-28 00:57:33.301193 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-28 00:57:33.301207 | orchestrator | Saturday 28 February 2026 00:50:41 +0000 (0:00:03.716) 0:00:42.463 ***** 2026-02-28 00:57:33.301218 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-28 00:57:33.301228 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-28 00:57:33.301239 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-28 00:57:33.301250 | orchestrator | 2026-02-28 00:57:33.301268 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-28 00:57:33.301286 | orchestrator | Saturday 28 February 2026 00:50:47 +0000 (0:00:06.317) 0:00:48.780 ***** 2026-02-28 00:57:33.301297 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.301308 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.301319 | orchestrator | skipping: 
[testbed-node-2] 2026-02-28 00:57:33.301330 | orchestrator | 2026-02-28 00:57:33.301341 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-28 00:57:33.301352 | orchestrator | Saturday 28 February 2026 00:50:49 +0000 (0:00:01.535) 0:00:50.318 ***** 2026-02-28 00:57:33.301369 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-28 00:57:33.301388 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-28 00:57:33.301407 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-28 00:57:33.301427 | orchestrator | 2026-02-28 00:57:33.301446 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-28 00:57:33.301466 | orchestrator | Saturday 28 February 2026 00:50:52 +0000 (0:00:03.334) 0:00:53.653 ***** 2026-02-28 00:57:33.301486 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-28 00:57:33.301506 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-28 00:57:33.301525 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-28 00:57:33.301541 | orchestrator | 2026-02-28 00:57:33.301552 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-28 00:57:33.301563 | orchestrator | Saturday 28 February 2026 00:50:55 +0000 (0:00:03.360) 0:00:57.013 ***** 2026-02-28 00:57:33.301574 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:57:33.301612 | 
orchestrator | 2026-02-28 00:57:33.301624 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-28 00:57:33.301635 | orchestrator | Saturday 28 February 2026 00:50:56 +0000 (0:00:00.927) 0:00:57.940 ***** 2026-02-28 00:57:33.301646 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-02-28 00:57:33.301663 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-02-28 00:57:33.301674 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-02-28 00:57:33.301685 | orchestrator | 2026-02-28 00:57:33.301696 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-28 00:57:33.301707 | orchestrator | Saturday 28 February 2026 00:50:59 +0000 (0:00:02.700) 0:01:00.641 ***** 2026-02-28 00:57:33.301718 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-28 00:57:33.301729 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-28 00:57:33.301740 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-28 00:57:33.301750 | orchestrator | 2026-02-28 00:57:33.301761 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-02-28 00:57:33.301772 | orchestrator | Saturday 28 February 2026 00:51:02 +0000 (0:00:03.186) 0:01:03.828 ***** 2026-02-28 00:57:33.301783 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.301793 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.301804 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.301815 | orchestrator | 2026-02-28 00:57:33.301994 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-02-28 00:57:33.302008 | orchestrator | Saturday 28 February 2026 00:51:03 +0000 (0:00:00.477) 0:01:04.306 ***** 2026-02-28 00:57:33.302068 | orchestrator | skipping: [testbed-node-0] 2026-02-28 
00:57:33.302082 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.302094 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.302116 | orchestrator | 2026-02-28 00:57:33.302128 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-28 00:57:33.302139 | orchestrator | Saturday 28 February 2026 00:51:03 +0000 (0:00:00.422) 0:01:04.728 ***** 2026-02-28 00:57:33.302152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-28 00:57:33.302176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-28 00:57:33.302189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-28 00:57:33.302200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:57:33.302218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:57:33.302230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': 
False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:57:33.302249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:57:33.302260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:57:33.302278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:57:33.302289 | orchestrator | 2026-02-28 00:57:33.302300 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-28 00:57:33.302311 | orchestrator | Saturday 28 February 2026 00:51:07 +0000 (0:00:03.834) 0:01:08.563 ***** 2026-02-28 00:57:33.302323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-28 00:57:33.302334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:57:33.302346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:57:33.302358 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.302417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-28 00:57:33.302437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:57:33.302448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:57:33.302460 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.302733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-28 00:57:33.302782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:57:33.302892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:57:33.302904 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.302916 | orchestrator | 2026-02-28 00:57:33.302928 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-28 00:57:33.302940 | orchestrator | Saturday 28 February 2026 00:51:08 +0000 (0:00:01.336) 0:01:09.900 ***** 2026-02-28 00:57:33.302957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-28 00:57:33.302980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:57:33.302992 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:57:33.303003 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.303027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-28 00:57:33.303040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:57:33.303051 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:57:33.303068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-28 00:57:33.303086 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.303097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:57:33.303109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:57:33.303120 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:57:33.303134 | orchestrator |
2026-02-28 00:57:33.303153 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-02-28 00:57:33.303172 | orchestrator | Saturday 28 February 2026 00:51:10 +0000 (0:00:01.738) 0:01:11.639 *****
2026-02-28 00:57:33.303190 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-02-28 00:57:33.303210 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-02-28 00:57:33.303230 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-02-28 00:57:33.303248 | orchestrator |
2026-02-28 00:57:33.303263 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-02-28 00:57:33.303273 | orchestrator | Saturday 28 February 2026 00:51:12 +0000 (0:00:01.872) 0:01:13.511 *****
2026-02-28 00:57:33.303285 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-02-28 00:57:33.303357 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-02-28 00:57:33.303371 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-02-28 00:57:33.303383 | orchestrator |
2026-02-28 00:57:33.303394 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-02-28 00:57:33.303405 | orchestrator | Saturday 28 February 2026 00:51:14 +0000 (0:00:02.008) 0:01:15.520 *****
2026-02-28 00:57:33.303416 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-28 00:57:33.303427 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-28 00:57:33.303438 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-28 00:57:33.303449 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-28 00:57:33.303460 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:57:33.303471 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-28 00:57:33.303482 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-28 00:57:33.303493 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:57:33.303514 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:57:33.303525 | orchestrator |
2026-02-28 00:57:33.303536 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-02-28 00:57:33.303547 | orchestrator | Saturday 28 February 2026 00:51:16 +0000 (0:00:02.112) 0:01:17.633 *****
2026-02-28 00:57:33.303559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-28 00:57:33.303577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-28 00:57:33.303657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-28 00:57:33.303670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:57:33.303691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:57:33.303703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:57:33.303723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:57:33.303741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:57:33.303753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:57:33.303764 | orchestrator |
2026-02-28 00:57:33.303775 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-02-28 00:57:33.303787 | orchestrator | Saturday 28 February 2026 00:51:19 +0000 (0:00:02.756) 0:01:20.389 *****
2026-02-28 00:57:33.303798 | orchestrator | changed: [testbed-node-0] => {
2026-02-28 00:57:33.303809 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 00:57:33.303820 | orchestrator | }
2026-02-28 00:57:33.303832 | orchestrator | changed: [testbed-node-1] => {
2026-02-28 00:57:33.303843 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 00:57:33.303854 | orchestrator | }
2026-02-28 00:57:33.303865 | orchestrator | changed: [testbed-node-2] => {
2026-02-28 00:57:33.303876 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 00:57:33.303887 | orchestrator | }
2026-02-28 00:57:33.303934 | orchestrator |
2026-02-28 00:57:33.303945 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-28 00:57:33.303956 | orchestrator | Saturday 28 February 2026 00:51:19 +0000 (0:00:00.384) 0:01:20.774 *****
2026-02-28 00:57:33.303967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-28 00:57:33.303988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:57:33.304112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:57:33.304124 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:57:33.304134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-28 00:57:33.304151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:57:33.304162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:57:33.304172 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:57:33.304187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-28 00:57:33.304204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:57:33.304233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:57:33.304259 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:57:33.304274 | orchestrator |
2026-02-28 00:57:33.304284 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-02-28 00:57:33.304294 | orchestrator | Saturday 28 February 2026 00:51:21 +0000 (0:00:01.437) 0:01:22.211 *****
2026-02-28 00:57:33.304304 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:57:33.304314 | orchestrator |
2026-02-28 00:57:33.304323 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-02-28 00:57:33.304333 | orchestrator | Saturday 28 February 2026 00:51:21 +0000 (0:00:00.755) 0:01:22.966 *****
2026-02-28 00:57:33.304345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 00:57:33.304364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-28 00:57:33.304376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-28 00:57:33.304387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-28 00:57:33.304408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 00:57:33.304436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-28 00:57:33.304454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-28 00:57:33.304483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-28 00:57:33.304503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 00:57:33.304521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-28 00:57:33.304620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-28 00:57:33.304647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-28 00:57:33.304666 | orchestrator |
2026-02-28 00:57:33.304683 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-02-28 00:57:33.304698 | orchestrator | Saturday 28 February 2026 00:51:27 +0000 (0:00:05.256) 0:01:28.222 *****
2026-02-28 00:57:33.304709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 00:57:33.304727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-28 00:57:33.304738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-28 00:57:33.304827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-28 00:57:33.304849 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:57:33.304871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 00:57:33.304882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-28 00:57:33.304892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-28 00:57:33.304908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-28 00:57:33.304978 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:57:33.304997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 00:57:33.305059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-28 00:57:33.305082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-28 00:57:33.305093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-28 00:57:33.305103 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:57:33.305113 | orchestrator |
2026-02-28 00:57:33.305123 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-02-28 00:57:33.305133 | orchestrator | Saturday 28 February 2026 00:51:28 +0000 (0:00:01.232) 0:01:29.455 *****
2026-02-28 00:57:33.305144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-28 00:57:33.305156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-28 00:57:33.305167 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:57:33.305183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-28 00:57:33.305194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-28 00:57:33.305204 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:57:33.305229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-28 00:57:33.305239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-28 00:57:33.305256 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:57:33.305266 | orchestrator |
2026-02-28 00:57:33.305299 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-02-28 00:57:33.305310 | orchestrator | Saturday 28 February 2026 00:51:29 +0000 (0:00:01.053) 0:01:30.508 *****
2026-02-28 00:57:33.305320 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:57:33.305330 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:57:33.305340 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:57:33.305349 | orchestrator |
2026-02-28 00:57:33.305359 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-02-28 00:57:33.305369 | orchestrator | Saturday 28 February 2026 00:51:31 +0000 (0:00:01.771) 0:01:32.279 *****
2026-02-28 00:57:33.305379 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:57:33.305388 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:57:33.305497 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:57:33.305508 | orchestrator |
2026-02-28 00:57:33.305518 | orchestrator | TASK [include_role : barbican] *************************************************
2026-02-28 00:57:33.305564 | orchestrator | Saturday 28 February 2026 00:51:33 +0000 (0:00:02.098) 0:01:34.378 *****
2026-02-28 00:57:33.305576 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:57:33.305613 | orchestrator |
2026-02-28 00:57:33.305627 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2026-02-28 00:57:33.305638 | orchestrator | Saturday 28 February 2026 00:51:34 +0000 (0:00:01.100) 0:01:35.479 *****
2026-02-28 00:57:33.305657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 00:57:33.305671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-28 00:57:33.305682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-28 00:57:33.305709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311',
'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.305768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.305789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.305800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.305811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.305834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.305845 | orchestrator | 2026-02-28 00:57:33.305855 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external 
frontend] *** 2026-02-28 00:57:33.305865 | orchestrator | Saturday 28 February 2026 00:51:41 +0000 (0:00:07.538) 0:01:43.018 ***** 2026-02-28 00:57:33.305875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.305934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.305946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.305956 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.305967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.305990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.306001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.306011 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.306066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.306081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.306098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.306122 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.306139 | orchestrator | 2026-02-28 00:57:33.306155 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-28 00:57:33.306171 | orchestrator | Saturday 28 February 2026 00:51:43 +0000 (0:00:01.282) 0:01:44.300 ***** 2026-02-28 00:57:33.306190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.306282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.306300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.306311 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.306321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.306332 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.306341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.306352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.306361 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.306375 | orchestrator | 2026-02-28 00:57:33.306391 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL 
users config] *********** 2026-02-28 00:57:33.306409 | orchestrator | Saturday 28 February 2026 00:51:44 +0000 (0:00:01.562) 0:01:45.863 ***** 2026-02-28 00:57:33.306426 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.306442 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.306458 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.306469 | orchestrator | 2026-02-28 00:57:33.306479 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-28 00:57:33.306489 | orchestrator | Saturday 28 February 2026 00:51:46 +0000 (0:00:01.396) 0:01:47.259 ***** 2026-02-28 00:57:33.306498 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.306670 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.306689 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.306707 | orchestrator | 2026-02-28 00:57:33.306718 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-28 00:57:33.306729 | orchestrator | Saturday 28 February 2026 00:51:48 +0000 (0:00:02.376) 0:01:49.636 ***** 2026-02-28 00:57:33.306738 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.306782 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.306808 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.306827 | orchestrator | 2026-02-28 00:57:33.306857 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-28 00:57:33.306874 | orchestrator | Saturday 28 February 2026 00:51:48 +0000 (0:00:00.391) 0:01:50.027 ***** 2026-02-28 00:57:33.306890 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:57:33.307009 | orchestrator | 2026-02-28 00:57:33.307044 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-28 00:57:33.307061 | orchestrator | Saturday 28 February 2026 00:51:50 
+0000 (0:00:01.330) 0:01:51.357 ***** 2026-02-28 00:57:33.307079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-02-28 00:57:33.307110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-02-28 00:57:33.307128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-02-28 00:57:33.307149 | orchestrator | 2026-02-28 00:57:33.307168 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-28 00:57:33.307184 | orchestrator | Saturday 28 February 2026 00:51:57 +0000 (0:00:07.492) 0:01:58.850 ***** 2026-02-28 00:57:33.307195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-02-28 00:57:33.307204 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.307221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-02-28 00:57:33.307237 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.307246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-02-28 00:57:33.307255 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.307263 | orchestrator | 2026-02-28 00:57:33.307271 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-28 00:57:33.307280 | orchestrator | Saturday 28 February 2026 
00:52:00 +0000 (0:00:02.601) 0:02:01.452 ***** 2026-02-28 00:57:33.307395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-28 00:57:33.307414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-28 00:57:33.307431 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.307446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-28 00:57:33.307463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-28 00:57:33.307478 | orchestrator | skipping: 
[testbed-node-0] 2026-02-28 00:57:33.307492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-28 00:57:33.307524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-28 00:57:33.307534 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.307542 | orchestrator | 2026-02-28 00:57:33.307550 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-28 00:57:33.307571 | orchestrator | Saturday 28 February 2026 00:52:03 +0000 (0:00:03.178) 0:02:04.630 ***** 2026-02-28 00:57:33.307579 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.307610 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.307618 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.307626 | orchestrator | 2026-02-28 00:57:33.307634 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-28 00:57:33.307642 | orchestrator | Saturday 28 February 2026 00:52:03 +0000 (0:00:00.427) 0:02:05.058 ***** 2026-02-28 00:57:33.307649 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.307657 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.307665 | orchestrator | skipping: [testbed-node-2] 
2026-02-28 00:57:33.307673 | orchestrator | 2026-02-28 00:57:33.307681 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-28 00:57:33.307689 | orchestrator | Saturday 28 February 2026 00:52:05 +0000 (0:00:01.529) 0:02:06.588 ***** 2026-02-28 00:57:33.307697 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:57:33.307705 | orchestrator | 2026-02-28 00:57:33.307713 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-28 00:57:33.307782 | orchestrator | Saturday 28 February 2026 00:52:06 +0000 (0:00:01.007) 0:02:07.595 ***** 2026-02-28 00:57:33.307797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.307807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.307817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.307839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.307848 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.307857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.307870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.307885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.307899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.307933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.307942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.307955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.307963 | orchestrator | 2026-02-28 00:57:33.307971 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-28 00:57:33.307979 | orchestrator | Saturday 28 February 2026 00:52:10 +0000 (0:00:04.382) 0:02:11.978 ***** 2026-02-28 00:57:33.307988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.308003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.308017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.308026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.308069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.308079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.308094 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.308103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.308117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.308126 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.308135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.308147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.308157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.308175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.308184 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.308192 | orchestrator | 2026-02-28 00:57:33.308200 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-28 00:57:33.308208 | orchestrator | Saturday 28 February 2026 00:52:11 +0000 (0:00:01.033) 0:02:13.011 ***** 2026-02-28 00:57:33.308230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.308244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.308253 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.308262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.308270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.308278 | orchestrator | skipping: 
[testbed-node-1] 2026-02-28 00:57:33.308286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.308295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.308303 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.308311 | orchestrator | 2026-02-28 00:57:33.308319 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-28 00:57:33.308327 | orchestrator | Saturday 28 February 2026 00:52:13 +0000 (0:00:01.096) 0:02:14.107 ***** 2026-02-28 00:57:33.308335 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.308342 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.308350 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.308358 | orchestrator | 2026-02-28 00:57:33.308366 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-28 00:57:33.308374 | orchestrator | Saturday 28 February 2026 00:52:15 +0000 (0:00:02.142) 0:02:16.250 ***** 2026-02-28 00:57:33.308388 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.308397 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.308405 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.308414 | orchestrator | 2026-02-28 00:57:33.308428 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-28 00:57:33.308445 | orchestrator | Saturday 28 February 2026 00:52:17 +0000 (0:00:02.413) 0:02:18.663 ***** 2026-02-28 00:57:33.308465 | orchestrator | 
skipping: [testbed-node-0] 2026-02-28 00:57:33.308478 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.308491 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.308505 | orchestrator | 2026-02-28 00:57:33.308518 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-02-28 00:57:33.308532 | orchestrator | Saturday 28 February 2026 00:52:17 +0000 (0:00:00.386) 0:02:19.049 ***** 2026-02-28 00:57:33.308545 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.308558 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.308573 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.308614 | orchestrator | 2026-02-28 00:57:33.308623 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-28 00:57:33.308632 | orchestrator | Saturday 28 February 2026 00:52:18 +0000 (0:00:00.335) 0:02:19.385 ***** 2026-02-28 00:57:33.308640 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:57:33.308648 | orchestrator | 2026-02-28 00:57:33.308656 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-28 00:57:33.308667 | orchestrator | Saturday 28 February 2026 00:52:19 +0000 (0:00:01.061) 0:02:20.446 ***** 2026-02-28 00:57:33.308681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.308703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 00:57:33.308781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.308812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.308826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 00:57:33.308835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.308844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.308873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.308885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  
2026-02-28 00:57:33.308908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.308929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309071 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.309114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 00:57:33.309127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309185 | orchestrator | 2026-02-28 00:57:33.309194 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-28 00:57:33.309202 | orchestrator | Saturday 28 February 2026 00:52:23 +0000 (0:00:04.100) 0:02:24.547 ***** 2026-02-28 00:57:33.309211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.309223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 00:57:33.309232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309284 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.309297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.309305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 00:57:33.309313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309365 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.309378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.309386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 00:57:33.309395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.309476 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.309490 | orchestrator | 2026-02-28 00:57:33.309503 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-28 00:57:33.309536 | orchestrator | Saturday 28 February 2026 00:52:24 +0000 (0:00:01.199) 0:02:25.747 ***** 2026-02-28 00:57:33.309552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.309567 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.309632 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.309644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.309653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.309668 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.309676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.309690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.309704 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.309760 | orchestrator | 2026-02-28 00:57:33.309782 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-28 00:57:33.309795 | orchestrator | Saturday 28 February 2026 00:52:26 +0000 (0:00:01.705) 0:02:27.453 ***** 2026-02-28 00:57:33.309807 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.309819 | orchestrator | 
changed: [testbed-node-1] 2026-02-28 00:57:33.309832 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.309880 | orchestrator | 2026-02-28 00:57:33.309895 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-28 00:57:33.309909 | orchestrator | Saturday 28 February 2026 00:52:27 +0000 (0:00:01.351) 0:02:28.805 ***** 2026-02-28 00:57:33.309922 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.309935 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.309948 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.309961 | orchestrator | 2026-02-28 00:57:33.309972 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-28 00:57:33.310008 | orchestrator | Saturday 28 February 2026 00:52:30 +0000 (0:00:02.755) 0:02:31.560 ***** 2026-02-28 00:57:33.311385 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.311423 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.311431 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.311437 | orchestrator | 2026-02-28 00:57:33.311445 | orchestrator | TASK [include_role : glance] *************************************************** 2026-02-28 00:57:33.311452 | orchestrator | Saturday 28 February 2026 00:52:30 +0000 (0:00:00.390) 0:02:31.951 ***** 2026-02-28 00:57:33.311460 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:57:33.311466 | orchestrator | 2026-02-28 00:57:33.311473 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-28 00:57:33.311480 | orchestrator | Saturday 28 February 2026 00:52:31 +0000 (0:00:01.095) 0:02:33.046 ***** 2026-02-28 00:57:33.311496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 00:57:33.311531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-28 00:57:33.311543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 00:57:33.311563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-28 00:57:33.311575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 00:57:33.311609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-28 00:57:33.311621 | orchestrator | 2026-02-28 00:57:33.311633 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-28 00:57:33.311642 | orchestrator | Saturday 28 February 2026 00:52:37 +0000 (0:00:05.053) 0:02:38.100 ***** 2026-02-28 00:57:33.311650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 00:57:33.311662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-28 00:57:33.311674 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.311689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 00:57:33.311701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-28 00:57:33.311714 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.311727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 00:57:33.311739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-28 00:57:33.311752 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.311760 | orchestrator | 2026-02-28 00:57:33.311768 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-28 00:57:33.311776 | orchestrator | Saturday 28 February 2026 00:52:41 +0000 (0:00:04.189) 0:02:42.290 ***** 2026-02-28 00:57:33.311783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-28 00:57:33.311798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-28 00:57:33.311806 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.311814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-28 00:57:33.311823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-28 00:57:33.311831 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.311838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-28 00:57:33.311853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-28 00:57:33.311861 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.311869 | orchestrator | 2026-02-28 00:57:33.311877 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-28 00:57:33.311884 | orchestrator | Saturday 28 February 2026 00:52:45 +0000 (0:00:04.480) 0:02:46.771 ***** 2026-02-28 00:57:33.311892 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.311899 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.311906 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.311912 | orchestrator | 2026-02-28 00:57:33.311919 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-28 00:57:33.311926 | orchestrator | Saturday 28 February 2026 00:52:47 +0000 (0:00:01.294) 0:02:48.066 ***** 2026-02-28 00:57:33.311933 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.311939 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.311946 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.311953 | orchestrator | 2026-02-28 00:57:33.311959 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-28 
00:57:33.311966 | orchestrator | Saturday 28 February 2026 00:52:48 +0000 (0:00:01.782) 0:02:49.848 ***** 2026-02-28 00:57:33.311973 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.311979 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.311986 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.311993 | orchestrator | 2026-02-28 00:57:33.311999 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-28 00:57:33.312006 | orchestrator | Saturday 28 February 2026 00:52:49 +0000 (0:00:00.282) 0:02:50.130 ***** 2026-02-28 00:57:33.312013 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:57:33.312019 | orchestrator | 2026-02-28 00:57:33.312026 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-28 00:57:33.312033 | orchestrator | Saturday 28 February 2026 00:52:49 +0000 (0:00:00.813) 0:02:50.943 ***** 2026-02-28 00:57:33.312044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.312052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.312066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.312073 | orchestrator | 2026-02-28 00:57:33.312082 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-28 00:57:33.312089 | orchestrator | Saturday 28 February 2026 00:52:54 +0000 (0:00:04.447) 0:02:55.391 ***** 2026-02-28 00:57:33.312096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.312103 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.312110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.312117 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.312129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option 
httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.312136 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.312143 | orchestrator | 2026-02-28 00:57:33.312150 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-28 00:57:33.312157 | orchestrator | Saturday 28 February 2026 00:52:54 +0000 (0:00:00.561) 0:02:55.952 ***** 2026-02-28 00:57:33.312168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.312176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.312183 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.312189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.312201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.312208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option 
httpchk']}})  2026-02-28 00:57:33.312215 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.312224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.312231 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.312238 | orchestrator | 2026-02-28 00:57:33.312245 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-28 00:57:33.312251 | orchestrator | Saturday 28 February 2026 00:52:55 +0000 (0:00:00.973) 0:02:56.925 ***** 2026-02-28 00:57:33.312258 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.312265 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.312272 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.312278 | orchestrator | 2026-02-28 00:57:33.312285 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-28 00:57:33.312292 | orchestrator | Saturday 28 February 2026 00:52:57 +0000 (0:00:01.769) 0:02:58.695 ***** 2026-02-28 00:57:33.312298 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.312305 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.312312 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.312319 | orchestrator | 2026-02-28 00:57:33.312325 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-28 00:57:33.312332 | orchestrator | Saturday 28 February 2026 00:53:00 +0000 (0:00:02.629) 0:03:01.325 ***** 2026-02-28 00:57:33.312339 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.312345 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.312352 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.312359 | orchestrator | 
2026-02-28 00:57:33.312365 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-28 00:57:33.312372 | orchestrator | Saturday 28 February 2026 00:53:00 +0000 (0:00:00.453) 0:03:01.778 ***** 2026-02-28 00:57:33.312379 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:57:33.312385 | orchestrator | 2026-02-28 00:57:33.312392 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-28 00:57:33.312399 | orchestrator | Saturday 28 February 2026 00:53:01 +0000 (0:00:01.019) 0:03:02.798 ***** 2026-02-28 00:57:33.312411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 00:57:33.312426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 00:57:33.312443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 00:57:33.312455 | orchestrator | 2026-02-28 00:57:33.312465 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-28 00:57:33.312482 | orchestrator | Saturday 28 February 2026 00:53:07 +0000 (0:00:06.245) 0:03:09.044 ***** 2026-02-28 00:57:33.312506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 00:57:33.312526 | orchestrator | 
skipping: [testbed-node-0] 2026-02-28 00:57:33.312542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 00:57:33.312553 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.312570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 00:57:33.312640 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.312654 | orchestrator | 2026-02-28 00:57:33.312666 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-28 00:57:33.312676 | orchestrator | Saturday 28 February 2026 00:53:09 +0000 (0:00:01.347) 0:03:10.391 ***** 2026-02-28 00:57:33.312684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-28 00:57:33.312693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-28 00:57:33.312700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-28 00:57:33.312713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-28 00:57:33.312721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-28 00:57:33.312727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-28 00:57:33.312735 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.312742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-28 00:57:33.312749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-28 00:57:33.312761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-28 00:57:33.312768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-28 00:57:33.312774 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.312786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-28 00:57:33.312794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-28 00:57:33.312801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-28 00:57:33.312808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-28 00:57:33.312815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-28 00:57:33.312821 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.312828 | orchestrator | 2026-02-28 00:57:33.312835 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-28 00:57:33.312842 | orchestrator | Saturday 28 February 2026 00:53:10 +0000 (0:00:01.314) 0:03:11.706 ***** 2026-02-28 00:57:33.312849 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.312855 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.312862 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.312869 | orchestrator | 2026-02-28 00:57:33.312875 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-28 00:57:33.312882 | orchestrator | Saturday 28 February 2026 00:53:12 +0000 (0:00:01.425) 0:03:13.131 ***** 2026-02-28 00:57:33.312889 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.312895 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.312902 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.312909 | orchestrator | 2026-02-28 00:57:33.312919 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-28 00:57:33.312926 | orchestrator | Saturday 28 February 2026 00:53:14 +0000 (0:00:02.016) 0:03:15.147 ***** 2026-02-28 00:57:33.312933 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.312939 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.312946 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.312957 | orchestrator | 2026-02-28 00:57:33.312964 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-28 
00:57:33.312971 | orchestrator | Saturday 28 February 2026 00:53:14 +0000 (0:00:00.395) 0:03:15.543 ***** 2026-02-28 00:57:33.312977 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.312984 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.312991 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.312997 | orchestrator | 2026-02-28 00:57:33.313004 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-28 00:57:33.313011 | orchestrator | Saturday 28 February 2026 00:53:15 +0000 (0:00:00.586) 0:03:16.129 ***** 2026-02-28 00:57:33.313017 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:57:33.313024 | orchestrator | 2026-02-28 00:57:33.313031 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-28 00:57:33.313037 | orchestrator | Saturday 28 February 2026 00:53:16 +0000 (0:00:01.566) 0:03:17.696 ***** 2026-02-28 00:57:33.313045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-28 00:57:33.313056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 00:57:33.313065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 00:57:33.313078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-28 00:57:33.313091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 00:57:33.313098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 00:57:33.313110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-28 00:57:33.313118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 00:57:33.313125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 00:57:33.313136 | orchestrator | 2026-02-28 00:57:33.313143 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-28 00:57:33.313150 | orchestrator | Saturday 28 February 2026 00:53:21 +0000 (0:00:05.017) 0:03:22.713 ***** 2026-02-28 00:57:33.313159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-28 00:57:33.313167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-28 00:57:33.313177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 00:57:33.313184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': 
'30'}}})  2026-02-28 00:57:33.313191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 00:57:33.313203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 00:57:33.313210 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.313217 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.313223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-28 00:57:33.313230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 00:57:33.313354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 00:57:33.313364 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.313370 | orchestrator | 2026-02-28 00:57:33.313376 | 
orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-28 00:57:33.313383 | orchestrator | Saturday 28 February 2026 00:53:22 +0000 (0:00:00.947) 0:03:23.661 ***** 2026-02-28 00:57:33.313389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-28 00:57:33.313396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-28 00:57:33.313409 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.313415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-28 00:57:33.313422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-28 00:57:33.313428 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.313435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-28 00:57:33.313442 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-28 00:57:33.313448 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.313454 | orchestrator | 2026-02-28 00:57:33.313460 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-28 00:57:33.313467 | orchestrator | Saturday 28 February 2026 00:53:23 +0000 (0:00:00.959) 0:03:24.620 ***** 2026-02-28 00:57:33.313473 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.313479 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.313485 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.313491 | orchestrator | 2026-02-28 00:57:33.313498 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-28 00:57:33.313504 | orchestrator | Saturday 28 February 2026 00:53:24 +0000 (0:00:01.249) 0:03:25.869 ***** 2026-02-28 00:57:33.313510 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.313516 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.313522 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.313529 | orchestrator | 2026-02-28 00:57:33.313535 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-28 00:57:33.313541 | orchestrator | Saturday 28 February 2026 00:53:27 +0000 (0:00:02.256) 0:03:28.125 ***** 2026-02-28 00:57:33.313547 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.313553 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.313559 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.313565 | orchestrator | 2026-02-28 00:57:33.313572 | orchestrator | TASK [include_role : magnum] *************************************************** 
2026-02-28 00:57:33.313578 | orchestrator | Saturday 28 February 2026 00:53:27 +0000 (0:00:00.343) 0:03:28.469 ***** 2026-02-28 00:57:33.313606 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:57:33.313612 | orchestrator | 2026-02-28 00:57:33.313618 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-02-28 00:57:33.313624 | orchestrator | Saturday 28 February 2026 00:53:28 +0000 (0:00:01.317) 0:03:29.787 ***** 2026-02-28 00:57:33.313636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.313700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.313728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.313739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.313750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.313776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.313788 | orchestrator | 2026-02-28 00:57:33.313798 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-28 00:57:33.313808 | orchestrator | Saturday 28 February 2026 00:53:33 +0000 (0:00:04.301) 0:03:34.088 ***** 2026-02-28 00:57:33.313823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.313835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.313843 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.313850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.313867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.313873 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.313880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.313890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.313896 | 
orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.313903 | orchestrator | 2026-02-28 00:57:33.313909 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-28 00:57:33.313915 | orchestrator | Saturday 28 February 2026 00:53:34 +0000 (0:00:01.191) 0:03:35.279 ***** 2026-02-28 00:57:33.313922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.313930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.313936 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.313943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.313949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.313960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.313990 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.313998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.314005 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.314012 | orchestrator | 2026-02-28 00:57:33.314058 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-28 00:57:33.314066 | orchestrator | Saturday 28 February 2026 00:53:35 +0000 (0:00:01.470) 0:03:36.750 ***** 2026-02-28 00:57:33.314077 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.314085 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.314092 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.314099 | orchestrator | 2026-02-28 00:57:33.314106 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-28 00:57:33.314113 | orchestrator | Saturday 28 February 2026 00:53:37 +0000 (0:00:01.882) 0:03:38.632 ***** 2026-02-28 00:57:33.314120 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.314127 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.314134 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.314141 | orchestrator | 2026-02-28 00:57:33.314148 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-28 00:57:33.314155 | orchestrator | Saturday 28 February 2026 00:53:40 +0000 (0:00:02.524) 0:03:41.156 ***** 2026-02-28 00:57:33.314162 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:57:33.314169 | orchestrator | 2026-02-28 00:57:33.314176 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-28 00:57:33.314183 | orchestrator | Saturday 28 February 2026 00:53:41 +0000 (0:00:01.160) 0:03:42.317 ***** 2026-02-28 00:57:33.314191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': 
{'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.314202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.314214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 
'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.314222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.314235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.314243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.314250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.314261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.314268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.314280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.314298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.314306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.314313 | orchestrator | 2026-02-28 00:57:33.314320 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-28 00:57:33.314327 | orchestrator | Saturday 28 February 2026 00:53:47 +0000 (0:00:05.781) 0:03:48.098 ***** 2026-02-28 00:57:33.314338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 
'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.314346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.314356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.314362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.314369 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.314380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.314387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.314396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.314409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.314416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.314426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.314432 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.314439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.314446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.314452 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.314458 | orchestrator | 2026-02-28 00:57:33.314465 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-28 00:57:33.314471 | orchestrator | Saturday 28 February 2026 00:53:48 +0000 (0:00:01.397) 0:03:49.496 ***** 2026-02-28 00:57:33.314484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.314491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.314497 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.314504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.314510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.314517 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.314523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.314529 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.314536 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.314543 | orchestrator | 2026-02-28 00:57:33.314553 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-28 00:57:33.314564 | orchestrator | Saturday 28 February 2026 00:53:49 +0000 (0:00:01.022) 0:03:50.519 ***** 2026-02-28 00:57:33.314574 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.314604 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.314614 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.314623 | orchestrator | 2026-02-28 00:57:33.314632 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-28 00:57:33.314642 | orchestrator | Saturday 28 February 2026 00:53:50 +0000 (0:00:01.080) 0:03:51.599 ***** 2026-02-28 00:57:33.314652 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.314662 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.314671 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.314681 | orchestrator | 2026-02-28 00:57:33.314709 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-28 00:57:33.314721 | orchestrator | Saturday 28 February 2026 00:53:52 +0000 (0:00:01.937) 0:03:53.536 ***** 2026-02-28 00:57:33.314737 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:57:33.314748 | orchestrator | 2026-02-28 00:57:33.314759 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-28 00:57:33.314770 | orchestrator | Saturday 28 February 2026 00:53:53 +0000 (0:00:01.251) 
0:03:54.788 ***** 2026-02-28 00:57:33.314780 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-28 00:57:33.314790 | orchestrator | 2026-02-28 00:57:33.314800 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-28 00:57:33.314811 | orchestrator | Saturday 28 February 2026 00:53:56 +0000 (0:00:02.687) 0:03:57.475 ***** 2026-02-28 00:57:33.314827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:57:33.314847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-28 00:57:33.314858 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.314876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:57:33.314889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-28 00:57:33.314906 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.314922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:57:33.314933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-28 00:57:33.314944 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.314955 | orchestrator | 2026-02-28 00:57:33.314965 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-28 00:57:33.314976 | orchestrator | Saturday 28 February 2026 00:53:58 +0000 (0:00:02.289) 0:03:59.765 ***** 2026-02-28 00:57:33.314995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': 
[' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:57:33.315017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-28 00:57:33.315028 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.315040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:57:33.315055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-28 00:57:33.315072 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.315088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:57:33.315100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-28 00:57:33.315111 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.315121 | orchestrator | 2026-02-28 00:57:33.315132 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-28 00:57:33.315142 | orchestrator | Saturday 28 February 2026 00:54:01 +0000 (0:00:03.150) 0:04:02.915 ***** 2026-02-28 00:57:33.315153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-28 00:57:33.315169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-28 00:57:33.315187 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.315199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-28 00:57:33.315210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-28 00:57:33.315221 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.315236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-28 00:57:33.315247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-28 00:57:33.315257 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.315268 | orchestrator | 2026-02-28 00:57:33.315279 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-28 00:57:33.315290 | orchestrator | Saturday 28 February 2026 00:54:04 +0000 (0:00:03.098) 0:04:06.014 ***** 2026-02-28 00:57:33.315301 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.315312 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.315322 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.315333 | orchestrator | 2026-02-28 00:57:33.315343 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-28 00:57:33.315354 | orchestrator | Saturday 28 February 2026 00:54:08 +0000 (0:00:03.311) 0:04:09.325 ***** 2026-02-28 00:57:33.315364 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.315374 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.315385 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.315395 | orchestrator | 2026-02-28 00:57:33.315406 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-28 00:57:33.315416 | orchestrator | Saturday 28 February 2026 00:54:10 +0000 (0:00:02.011) 0:04:11.337 ***** 2026-02-28 00:57:33.315433 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.315443 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.315453 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.315464 | orchestrator | 2026-02-28 00:57:33.315475 | orchestrator | TASK 
[include_role : memcached] ************************************************ 2026-02-28 00:57:33.315486 | orchestrator | Saturday 28 February 2026 00:54:10 +0000 (0:00:00.352) 0:04:11.690 ***** 2026-02-28 00:57:33.315496 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:57:33.315507 | orchestrator | 2026-02-28 00:57:33.315517 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-28 00:57:33.315528 | orchestrator | Saturday 28 February 2026 00:54:12 +0000 (0:00:01.449) 0:04:13.140 ***** 2026-02-28 00:57:33.315544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-28 00:57:33.315555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-28 00:57:33.315570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-28 00:57:33.315578 | orchestrator | 2026-02-28 00:57:33.315606 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-28 00:57:33.315613 | orchestrator | Saturday 28 February 2026 00:54:13 +0000 (0:00:01.627) 0:04:14.767 ***** 2026-02-28 00:57:33.315619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}})  2026-02-28 00:57:33.315634 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.315641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-28 00:57:33.315647 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.315658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-28 00:57:33.315665 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.315671 | orchestrator | 2026-02-28 00:57:33.315677 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-28 
00:57:33.315684 | orchestrator | Saturday 28 February 2026 00:54:14 +0000 (0:00:00.478) 0:04:15.246 ***** 2026-02-28 00:57:33.315690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-28 00:57:33.315697 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.315703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-28 00:57:33.315710 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.315716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-28 00:57:33.315723 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.315729 | orchestrator | 2026-02-28 00:57:33.315735 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-28 00:57:33.315745 | orchestrator | Saturday 28 February 2026 00:54:15 +0000 (0:00:01.028) 0:04:16.275 ***** 2026-02-28 00:57:33.315751 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.315757 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.315764 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.315770 | orchestrator | 2026-02-28 00:57:33.315776 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-28 00:57:33.315783 | orchestrator | 
Saturday 28 February 2026 00:54:15 +0000 (0:00:00.576) 0:04:16.852 ***** 2026-02-28 00:57:33.315789 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.315795 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.315805 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.315812 | orchestrator | 2026-02-28 00:57:33.315818 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-28 00:57:33.315828 | orchestrator | Saturday 28 February 2026 00:54:17 +0000 (0:00:01.381) 0:04:18.233 ***** 2026-02-28 00:57:33.315834 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.315841 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.315847 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.315853 | orchestrator | 2026-02-28 00:57:33.315859 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-28 00:57:33.315865 | orchestrator | Saturday 28 February 2026 00:54:17 +0000 (0:00:00.428) 0:04:18.661 ***** 2026-02-28 00:57:33.315872 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:57:33.315878 | orchestrator | 2026-02-28 00:57:33.315884 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-02-28 00:57:33.315890 | orchestrator | Saturday 28 February 2026 00:54:19 +0000 (0:00:02.090) 0:04:20.751 ***** 2026-02-28 00:57:33.315897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.315909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.315916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-28 00:57:33.315926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-28 00:57:33.315937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 
5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.315949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.315958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-28 00:57:33.315965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-28 00:57:33.315972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-28 00:57:33.315985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.315992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:57:33.316003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-28 00:57:33.316009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.316016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-28 00:57:33.316030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-28 00:57:33.316037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.316044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-28 00:57:33.316050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-28 00:57:33.316061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-28 00:57:33.316068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-28 00:57:33.316074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.316088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:57:33.316095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-28 00:57:33.316106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.316112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:57:33.316123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-28 00:57:33.316139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-28 00:57:33.316155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': 
{'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.316165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-28 00:57:33.316180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:57:33.316191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.316209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.316225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-28 00:57:33.316236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-28 00:57:33.316387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.316400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-28 00:57:33.316407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-28 00:57:33.316437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 
'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-28 00:57:33.316449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:57:33.316461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.316473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-28 00:57:33.316522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-28 00:57:33.316531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.316546 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-28 00:57:33.316554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:57:33.316561 | orchestrator | 2026-02-28 00:57:33.316567 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-28 00:57:33.316574 | orchestrator | Saturday 28 February 2026 00:54:25 +0000 (0:00:06.284) 0:04:27.036 ***** 2026-02-28 00:57:33.316626 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.316685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.316701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-28 00:57:33.316711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-28 00:57:33.316718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.316725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-28 00:57:33.316731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-28 00:57:33.316773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-28 00:57:33.316784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:57:33.316794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.316800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 
'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-28 00:57:33.316805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-28 00:57:33.316811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.316852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-28 00:57:33.316864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:57:33.316870 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.316896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.316903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.316947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-28 00:57:33.316959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-28 00:57:33.316965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.316974 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-28 00:57:33.316980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.316986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}}})  2026-02-28 00:57:33.317040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.317058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-28 00:57:33.317067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-28 00:57:33.317076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:57:33.317084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-28 00:57:33.317159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.317179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.317208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-28 00:57:33.317226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-28 00:57:33.317236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-28 00:57:33.317262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-28 
00:57:33.317268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.317332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-28 00:57:33.317341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-28 00:57:33.317347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:57:33.317352 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.317362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:57:33.317368 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.317385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-28 00:57:33.317435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-28 00:57:33.317443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.317453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-28 00:57:33.317459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:57:33.317465 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.317470 | orchestrator | 2026-02-28 00:57:33.317476 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-28 00:57:33.317482 | orchestrator | Saturday 28 February 2026 00:54:28 +0000 (0:00:02.108) 0:04:29.145 ***** 2026-02-28 00:57:33.317488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.317494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.317504 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.317510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.317516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.317521 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.317527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.317569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.317577 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.317603 | orchestrator | 2026-02-28 00:57:33.317617 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-28 00:57:33.317627 | orchestrator | Saturday 28 February 2026 00:54:30 +0000 (0:00:01.999) 0:04:31.145 ***** 2026-02-28 00:57:33.317634 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.317642 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.317651 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.317671 | orchestrator | 2026-02-28 00:57:33.317679 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-28 00:57:33.317687 | orchestrator | Saturday 28 February 2026 00:54:31 +0000 (0:00:01.157) 0:04:32.303 ***** 2026-02-28 00:57:33.317695 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.317702 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.317710 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.317719 | orchestrator | 2026-02-28 00:57:33.317727 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-28 00:57:33.317736 | orchestrator | Saturday 28 February 2026 00:54:33 +0000 (0:00:02.061) 0:04:34.364 ***** 2026-02-28 00:57:33.317744 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:57:33.317753 | orchestrator | 2026-02-28 00:57:33.317761 | orchestrator | TASK [haproxy-config 
: Copying over placement haproxy config] ****************** 2026-02-28 00:57:33.317769 | orchestrator | Saturday 28 February 2026 00:54:34 +0000 (0:00:01.513) 0:04:35.878 ***** 2026-02-28 00:57:33.317783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-28 00:57:33.317801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-28 00:57:33.317886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-28 00:57:33.317897 | orchestrator | 2026-02-28 00:57:33.317902 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-28 00:57:33.317908 | orchestrator | Saturday 28 February 2026 00:54:38 +0000 (0:00:03.807) 0:04:39.686 ***** 2026-02-28 00:57:33.317914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-28 00:57:33.317920 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.317930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-28 00:57:33.317941 | orchestrator | 
skipping: [testbed-node-1] 2026-02-28 00:57:33.317947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-28 00:57:33.317953 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.317959 | orchestrator | 2026-02-28 00:57:33.317964 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-28 00:57:33.317970 | orchestrator | Saturday 28 February 2026 00:54:39 +0000 (0:00:00.676) 0:04:40.363 ***** 2026-02-28 00:57:33.317975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-28 00:57:33.318040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-28 00:57:33.318050 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.318056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-28 00:57:33.318062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-28 00:57:33.318068 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.318073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-28 00:57:33.318079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-28 00:57:33.318085 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.318090 | orchestrator | 2026-02-28 00:57:33.318096 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-28 00:57:33.318106 | orchestrator | Saturday 28 February 2026 00:54:40 +0000 (0:00:01.280) 0:04:41.643 ***** 2026-02-28 00:57:33.318112 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.318117 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.318122 | orchestrator | changed: [testbed-node-2] 
2026-02-28 00:57:33.318128 | orchestrator | 2026-02-28 00:57:33.318133 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-28 00:57:33.318139 | orchestrator | Saturday 28 February 2026 00:54:41 +0000 (0:00:01.340) 0:04:42.984 ***** 2026-02-28 00:57:33.318147 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.318153 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.318159 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.318164 | orchestrator | 2026-02-28 00:57:33.318170 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-28 00:57:33.318175 | orchestrator | Saturday 28 February 2026 00:54:44 +0000 (0:00:02.363) 0:04:45.348 ***** 2026-02-28 00:57:33.318181 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:57:33.318186 | orchestrator | 2026-02-28 00:57:33.318191 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-28 00:57:33.318197 | orchestrator | Saturday 28 February 2026 00:54:45 +0000 (0:00:01.481) 0:04:46.829 ***** 2026-02-28 00:57:33.318203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.318248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.318257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.318270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.318277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.318283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.318335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.318344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.318354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.318366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.318372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.318395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.318401 | orchestrator | 2026-02-28 00:57:33.318407 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-28 00:57:33.318412 | orchestrator | Saturday 28 February 2026 00:54:51 +0000 (0:00:05.609) 0:04:52.438 ***** 2026-02-28 00:57:33.318418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 
'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.318435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}}}})
2026-02-28 00:57:33.318446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-28 00:57:33.318454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-28 00:57:33.318463 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:57:33.318493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 00:57:33.318515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 00:57:33.318530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-28 00:57:33.318539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-28 00:57:33.318548 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:57:33.318557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 00:57:33.318655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 00:57:33.318674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-28 00:57:33.318684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-28 00:57:33.318690 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:57:33.318696 | orchestrator |
2026-02-28 00:57:33.318701 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-02-28 00:57:33.318707 | orchestrator | Saturday 28 February 2026 00:54:52 +0000 (0:00:00.766) 0:04:53.205 *****
2026-02-28 00:57:33.318713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-28 00:57:33.318720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-28 00:57:33.318726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-28 00:57:33.318731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-28 00:57:33.318737 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:57:33.318743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-28 00:57:33.318748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-28 00:57:33.318754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-28 00:57:33.318782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-28 00:57:33.318788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-28 00:57:33.318794 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:57:33.318800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-28 00:57:33.318805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-28 00:57:33.318811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-28 00:57:33.318817 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:57:33.318822 | orchestrator |
2026-02-28 00:57:33.318828 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-02-28 00:57:33.318833 | orchestrator | Saturday 28 February 2026 00:54:53 +0000 (0:00:00.969) 0:04:54.174 *****
2026-02-28 00:57:33.318839 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:57:33.318844 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:57:33.318850 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:57:33.318855 | orchestrator |
2026-02-28 00:57:33.318861 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-02-28 00:57:33.318866 | orchestrator | Saturday 28 February 2026 00:54:54 +0000 (0:00:01.664) 0:04:55.839 *****
2026-02-28 00:57:33.318872 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:57:33.318880 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:57:33.318886 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:57:33.318892 | orchestrator |
2026-02-28 00:57:33.318897 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-02-28 00:57:33.318902 | orchestrator | Saturday 28 February 2026 00:54:56 +0000 (0:00:02.209) 0:04:58.049 *****
2026-02-28 00:57:33.318908 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:57:33.318914 | orchestrator |
2026-02-28 00:57:33.318923 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-02-28 00:57:33.318932 | orchestrator | Saturday 28 February 2026 00:54:58 +0000 (0:00:01.594) 0:04:59.644 *****
2026-02-28 00:57:33.318940 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item=nova-novncproxy)
2026-02-28 00:57:33.318949 | orchestrator |
2026-02-28 00:57:33.318957 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-02-28 00:57:33.318966 | orchestrator | Saturday 28 February 2026 00:55:00 +0000 (0:00:01.437) 0:05:01.081 *****
2026-02-28 00:57:33.318976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-28 00:57:33.318992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-28 00:57:33.319030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-28 00:57:33.319041 | orchestrator |
2026-02-28 00:57:33.319049 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-02-28 00:57:33.319057 | orchestrator | Saturday 28 February 2026 00:55:03 +0000 (0:00:03.568) 0:05:04.649 *****
2026-02-28 00:57:33.319063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-28 00:57:33.319068 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:57:33.319073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-28 00:57:33.319078 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:57:33.319083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-28 00:57:33.319092 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:57:33.319097 | orchestrator |
2026-02-28 00:57:33.319102 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-02-28 00:57:33.319106 | orchestrator | Saturday 28 February 2026 00:55:04 +0000 (0:00:01.251) 0:05:05.901 *****
2026-02-28 00:57:33.319111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-28 00:57:33.319117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-28 00:57:33.319128 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:57:33.319133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-28 00:57:33.319138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-28 00:57:33.319143 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:57:33.319148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-28 00:57:33.319153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-28 00:57:33.319158 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:57:33.319163 | orchestrator |
2026-02-28 00:57:33.319168 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-28 00:57:33.319173 | orchestrator | Saturday 28 February 2026 00:55:06 +0000 (0:00:01.805) 0:05:07.706 *****
2026-02-28 00:57:33.319178 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:57:33.319183 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:57:33.319188 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:57:33.319193 | orchestrator |
2026-02-28 00:57:33.319198 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-28 00:57:33.319217 | orchestrator | Saturday 28 February 2026 00:55:09 +0000 (0:00:02.702) 0:05:10.408 *****
2026-02-28 00:57:33.319223 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:57:33.319228 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:57:33.319233 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:57:33.319237 | orchestrator |
2026-02-28 00:57:33.319242 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-02-28 00:57:33.319247 | orchestrator | Saturday 28 February 2026 00:55:12 +0000 (0:00:03.539) 0:05:13.947 *****
2026-02-28 00:57:33.319253 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-02-28 00:57:33.319258 | orchestrator |
2026-02-28 00:57:33.319262 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-02-28 00:57:33.319267 | orchestrator | Saturday 28 February 2026 00:55:13 +0000 (0:00:00.932) 0:05:14.880 *****
2026-02-28 00:57:33.319272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-28 00:57:33.319278 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:57:33.319283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-28 00:57:33.319292 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:57:33.319300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-28 00:57:33.319305 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:57:33.319310 | orchestrator |
2026-02-28 00:57:33.319315 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-02-28 00:57:33.319320 | orchestrator | Saturday 28 February 2026 00:55:15 +0000 (0:00:01.492) 0:05:16.373 *****
2026-02-28 00:57:33.319325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-28 00:57:33.319330 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:57:33.319335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-28 00:57:33.319340 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:57:33.319358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-28 00:57:33.319364 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:57:33.319369 | orchestrator |
2026-02-28 00:57:33.319374 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-02-28 00:57:33.319379 | orchestrator | Saturday 28 February 2026 00:55:17 +0000 (0:00:01.851) 0:05:18.224 *****
2026-02-28 00:57:33.319384 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:57:33.319388 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:57:33.319393 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:57:33.319398 | orchestrator |
2026-02-28 00:57:33.319403 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-28 00:57:33.319408 | orchestrator | Saturday 28 February 2026 00:55:18 +0000 (0:00:01.299) 0:05:19.524 *****
2026-02-28 00:57:33.319413 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:57:33.319418 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:57:33.319423 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:57:33.319428 | orchestrator |
2026-02-28 00:57:33.319433 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-28 00:57:33.319437 | orchestrator | Saturday 28 February 2026 00:55:20 +0000 (0:00:02.413) 0:05:21.937 *****
2026-02-28 00:57:33.319442 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:57:33.319451 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:57:33.319456 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:57:33.319461 | orchestrator |
2026-02-28 00:57:33.319466 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-02-28 00:57:33.319471 | orchestrator | Saturday 28 February 2026 00:55:24 +0000 (0:00:03.344) 0:05:25.282 *****
2026-02-28 00:57:33.319476 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-02-28 00:57:33.319481 | orchestrator |
2026-02-28 00:57:33.319485 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-02-28 00:57:33.319490 | orchestrator | Saturday 28 February 2026 00:55:25 +0000 (0:00:01.158) 0:05:26.440 *****
2026-02-28 00:57:33.319501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-28 00:57:33.319506 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:57:33.319511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-28 00:57:33.319516 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:57:33.319521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-28 00:57:33.319526 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:57:33.319531 | orchestrator |
2026-02-28 00:57:33.319536 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-02-28 00:57:33.319541 | orchestrator | Saturday 28 February 2026 00:55:27 +0000 (0:00:01.702) 0:05:28.142 *****
2026-02-28 00:57:33.319546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-28 00:57:33.319551 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:57:33.319570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-28 00:57:33.319579 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:57:33.319602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-28 00:57:33.319607 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:57:33.319612 | orchestrator |
2026-02-28 00:57:33.319617 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-02-28 00:57:33.319622 | orchestrator | Saturday 28 February 2026 00:55:28 +0000 (0:00:01.404) 0:05:29.547 *****
2026-02-28 00:57:33.319627 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:57:33.319632 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:57:33.319637 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:57:33.319641 | orchestrator |
2026-02-28 00:57:33.319646 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-28 00:57:33.319651 | orchestrator | Saturday 28 February 2026 00:55:30 +0000 (0:00:01.791) 0:05:31.339 *****
2026-02-28 00:57:33.319656 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:57:33.319661 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:57:33.319666 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:57:33.319671 | orchestrator |
2026-02-28 00:57:33.319676 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-28 00:57:33.319681 | orchestrator | Saturday 28 February 2026 00:55:33 +0000 (0:00:02.805) 0:05:34.145 *****
2026-02-28 00:57:33.319685 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:57:33.319690 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:57:33.319695 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:57:33.319700 | orchestrator |
2026-02-28 00:57:33.319705 | orchestrator | TASK [include_role : octavia] **************************************************
2026-02-28 00:57:33.319713 | orchestrator | Saturday 28 February 2026 00:55:36 +0000 (0:00:03.310) 0:05:37.455 *****
2026-02-28 00:57:33.319718 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:57:33.319723 | orchestrator |
2026-02-28 00:57:33.319728 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-02-28 00:57:33.319733 | orchestrator | Saturday 28 February 2026 00:55:37 +0000 (0:00:01.375) 0:05:38.830 *****
2026-02-28 00:57:33.319738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-28 00:57:33.319744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-28 00:57:33.319776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-28 00:57:33.319785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-28 00:57:33.319794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-28 00:57:33.319807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-28 00:57:33.319815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-28 00:57:33.319824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-28 00:57:33.319859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-28 00:57:33.319868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm',
'', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 00:57:33.319873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.319882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 00:57:33.319887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 00:57:33.319892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 00:57:33.319901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.319906 | orchestrator | 2026-02-28 00:57:33.319925 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-28 00:57:33.319930 | orchestrator | Saturday 28 February 2026 
00:55:41 +0000 (0:00:04.121) 0:05:42.952 ***** 2026-02-28 00:57:33.319936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 00:57:33.319941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 00:57:33.319949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 00:57:33.319954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 00:57:33.319959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.319968 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.319987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 00:57:33.319993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 00:57:33.319998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 00:57:33.320007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 00:57:33.320012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.320017 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.320025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 00:57:33.320043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 00:57:33.320049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 00:57:33.320054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 00:57:33.320062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:57:33.320067 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.320072 | orchestrator | 2026-02-28 00:57:33.320077 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-28 00:57:33.320082 | orchestrator | Saturday 28 February 2026 00:55:43 +0000 (0:00:01.320) 0:05:44.272 ***** 2026-02-28 00:57:33.320088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-28 00:57:33.320094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-28 00:57:33.320103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-28 00:57:33.320108 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.320113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-28 00:57:33.320118 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.320123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-28 00:57:33.320128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-28 00:57:33.320132 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.320137 | orchestrator | 2026-02-28 00:57:33.320142 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-28 00:57:33.320147 | orchestrator | Saturday 28 February 2026 00:55:44 +0000 (0:00:00.958) 0:05:45.231 ***** 2026-02-28 00:57:33.320152 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.320157 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.320162 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.320167 | orchestrator | 2026-02-28 00:57:33.320172 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-28 00:57:33.320176 | orchestrator | Saturday 28 February 2026 00:55:45 +0000 (0:00:01.399) 0:05:46.631 ***** 2026-02-28 00:57:33.320195 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.320201 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.320206 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.320211 | orchestrator | 2026-02-28 00:57:33.320216 | orchestrator | TASK [include_role : opensearch] 
*********************************************** 2026-02-28 00:57:33.320220 | orchestrator | Saturday 28 February 2026 00:55:47 +0000 (0:00:02.334) 0:05:48.965 ***** 2026-02-28 00:57:33.320225 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:57:33.320230 | orchestrator | 2026-02-28 00:57:33.320235 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-28 00:57:33.320240 | orchestrator | Saturday 28 February 2026 00:55:49 +0000 (0:00:01.876) 0:05:50.842 ***** 2026-02-28 00:57:33.320245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.320255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.320266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.320286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-28 00:57:33.320292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-28 00:57:33.320301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 
'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-28 00:57:33.320310 | orchestrator | 2026-02-28 00:57:33.320315 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-28 00:57:33.320320 | orchestrator | Saturday 28 February 2026 00:55:55 +0000 (0:00:05.734) 0:05:56.576 ***** 2026-02-28 00:57:33.320325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.320344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-28 00:57:33.320350 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.320355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.320368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-28 00:57:33.320373 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.320379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.320399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-28 00:57:33.320404 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.320410 | orchestrator | 2026-02-28 00:57:33.320415 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] 
******************** 2026-02-28 00:57:33.320420 | orchestrator | Saturday 28 February 2026 00:55:56 +0000 (0:00:01.141) 0:05:57.718 ***** 2026-02-28 00:57:33.320425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.320436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-28 00:57:33.320442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-28 00:57:33.320447 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.320455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.320460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-28 00:57:33.320465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-28 00:57:33.320470 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.320475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.320480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-28 00:57:33.320485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-28 00:57:33.320490 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.320495 | orchestrator | 2026-02-28 00:57:33.320500 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-28 00:57:33.320505 | orchestrator | Saturday 28 February 2026 00:55:58 +0000 (0:00:01.526) 0:05:59.245 ***** 2026-02-28 00:57:33.320510 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.320514 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.320519 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.320524 | orchestrator | 2026-02-28 00:57:33.320529 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-28 00:57:33.320534 | orchestrator | Saturday 28 February 2026 00:55:58 +0000 (0:00:00.467) 0:05:59.712 
***** 2026-02-28 00:57:33.320552 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.320557 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.320562 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.320567 | orchestrator | 2026-02-28 00:57:33.320572 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-28 00:57:33.320577 | orchestrator | Saturday 28 February 2026 00:56:00 +0000 (0:00:01.446) 0:06:01.158 ***** 2026-02-28 00:57:33.320607 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:57:33.320615 | orchestrator | 2026-02-28 00:57:33.320623 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-28 00:57:33.320631 | orchestrator | Saturday 28 February 2026 00:56:01 +0000 (0:00:01.830) 0:06:02.989 ***** 2026-02-28 00:57:33.320640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-28 00:57:33.320651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 00:57:33.320658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.320663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.320686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-28 00:57:33.320696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 00:57:33.320702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 00:57:33.320707 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.320715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.320720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 00:57:33.320726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-28 00:57:33.320744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 00:57:33.320754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.320759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.320764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 00:57:33.320772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.320778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-28 00:57:33.320801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.320807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.320813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.320821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option 
httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-28 00:57:33.320826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 00:57:33.320831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.320856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.320862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 00:57:33.320867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 00:57:33.320875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-28 00:57:33.320880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.320885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.320894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 00:57:33.320899 | orchestrator | 2026-02-28 00:57:33.320917 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-28 00:57:33.320922 | orchestrator | Saturday 28 February 2026 00:56:06 +0000 (0:00:04.555) 0:06:07.545 ***** 2026-02-28 00:57:33.320928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-28 00:57:33.320933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 00:57:33.320941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.320946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.320951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 00:57:33.320974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.320981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-28 00:57:33.320986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 
'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.320992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.320997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 00:57:33.321002 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.321021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-28 00:57:33.321033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 00:57:33.321039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.321044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.321049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 00:57:33.321056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 
00:57:33.321066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-28 00:57:33.321075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': 
'9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-28 00:57:33.321081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 00:57:33.321086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.321093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.321099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.321107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.321112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 00:57:33.321124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 00:57:33.321129 | orchestrator | skipping: [testbed-node-1] 
2026-02-28 00:57:33.321134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 00:57:33.321142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': 
['option httpchk', 'timeout server 45s']}}}})  2026-02-28 00:57:33.321148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.321156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:57:33.321161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 00:57:33.321166 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.321171 | orchestrator | 2026-02-28 00:57:33.321176 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-28 00:57:33.321181 | orchestrator | Saturday 28 February 2026 00:56:07 +0000 
(0:00:00.915) 0:06:08.460 ***** 2026-02-28 00:57:33.321189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-28 00:57:33.321194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-28 00:57:33.321200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.321205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-28 00:57:33.321210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}) 
 2026-02-28 00:57:33.321218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-28 00:57:33.321227 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.321232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.321237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.321242 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.321247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-28 00:57:33.321252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-28 00:57:33.321257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.321264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-28 00:57:33.321270 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.321275 | orchestrator | 2026-02-28 00:57:33.321279 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-28 00:57:33.321284 | orchestrator | Saturday 28 February 2026 00:56:08 +0000 (0:00:01.437) 0:06:09.898 ***** 2026-02-28 00:57:33.321289 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.321294 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.321299 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.321304 | orchestrator | 2026-02-28 00:57:33.321309 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-28 00:57:33.321314 | orchestrator | Saturday 28 February 2026 00:56:09 +0000 (0:00:00.574) 0:06:10.473 ***** 2026-02-28 00:57:33.321319 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.321324 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.321328 | orchestrator | skipping: 
[testbed-node-2] 2026-02-28 00:57:33.321333 | orchestrator | 2026-02-28 00:57:33.321338 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-28 00:57:33.321343 | orchestrator | Saturday 28 February 2026 00:56:10 +0000 (0:00:01.445) 0:06:11.918 ***** 2026-02-28 00:57:33.321348 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:57:33.321353 | orchestrator | 2026-02-28 00:57:33.321358 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-28 00:57:33.321363 | orchestrator | Saturday 28 February 2026 00:56:12 +0000 (0:00:01.541) 0:06:13.460 ***** 2026-02-28 00:57:33.321375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:57:33.321381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:57:33.321389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:57:33.321394 | orchestrator | 2026-02-28 00:57:33.321399 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-28 00:57:33.321404 | orchestrator | Saturday 28 February 2026 00:56:15 
+0000 (0:00:02.648) 0:06:16.109 ***** 2026-02-28 00:57:33.321409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-28 00:57:33.321417 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.321425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': 
'30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-28 00:57:33.321431 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.321436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-28 00:57:33.321441 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.321446 | orchestrator | 2026-02-28 00:57:33.321451 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-28 00:57:33.321456 | orchestrator | Saturday 28 February 2026 00:56:15 +0000 (0:00:00.446) 0:06:16.555 ***** 2026-02-28 00:57:33.321461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-28 00:57:33.321466 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.321471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}})  2026-02-28 00:57:33.321475 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.321480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-28 00:57:33.321485 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.321490 | orchestrator | 2026-02-28 00:57:33.321497 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-28 00:57:33.321502 | orchestrator | Saturday 28 February 2026 00:56:16 +0000 (0:00:00.637) 0:06:17.193 ***** 2026-02-28 00:57:33.321507 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.321512 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.321517 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.321522 | orchestrator | 2026-02-28 00:57:33.321527 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-28 00:57:33.321532 | orchestrator | Saturday 28 February 2026 00:56:17 +0000 (0:00:00.880) 0:06:18.074 ***** 2026-02-28 00:57:33.321540 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.321544 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.321549 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.321554 | orchestrator | 2026-02-28 00:57:33.321559 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-28 00:57:33.321564 | orchestrator | Saturday 28 February 2026 00:56:18 +0000 (0:00:01.428) 0:06:19.502 ***** 2026-02-28 00:57:33.321569 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:57:33.321574 | orchestrator | 2026-02-28 00:57:33.321579 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-28 00:57:33.321599 | orchestrator | Saturday 
28 February 2026 00:56:19 +0000 (0:00:01.539) 0:06:21.041 ***** 2026-02-28 00:57:33.321607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-28 00:57:33.321613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-28 00:57:33.321618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-28 00:57:33.321627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-28 00:57:33.321639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-28 00:57:33.321644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-28 00:57:33.321650 | orchestrator | 2026-02-28 00:57:33.321655 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-28 00:57:33.321660 | orchestrator | Saturday 28 February 2026 00:56:27 +0000 (0:00:07.176) 0:06:28.217 ***** 2026-02-28 00:57:33.321667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-28 00:57:33.321677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-28 00:57:33.321682 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.321690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-28 
00:57:33.321695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-28 00:57:33.321701 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.321708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-28 00:57:33.321717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-28 00:57:33.321722 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.321727 | orchestrator | 2026-02-28 00:57:33.321732 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-28 00:57:33.321737 | orchestrator | Saturday 28 February 2026 00:56:28 +0000 (0:00:01.201) 0:06:29.419 ***** 2026-02-28 00:57:33.321742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-28 00:57:33.321750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-28 00:57:33.321756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-28 00:57:33.321761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-28 00:57:33.321766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-28 00:57:33.321771 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.321776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-28 00:57:33.321781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-28 00:57:33.321790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-28 00:57:33.321795 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.321800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-28 00:57:33.321808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-28 00:57:33.321813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-28 00:57:33.321818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-28 00:57:33.321823 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.321828 | orchestrator | 2026-02-28 00:57:33.321833 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-28 00:57:33.321838 | orchestrator | Saturday 28 February 2026 00:56:29 +0000 (0:00:01.030) 0:06:30.449 ***** 2026-02-28 00:57:33.321843 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.321848 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.321853 | orchestrator | changed: 
[testbed-node-2] 2026-02-28 00:57:33.321857 | orchestrator | 2026-02-28 00:57:33.321862 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-28 00:57:33.321867 | orchestrator | Saturday 28 February 2026 00:56:30 +0000 (0:00:01.228) 0:06:31.677 ***** 2026-02-28 00:57:33.321872 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.321877 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.321882 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.321886 | orchestrator | 2026-02-28 00:57:33.321891 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-28 00:57:33.321896 | orchestrator | Saturday 28 February 2026 00:56:32 +0000 (0:00:02.336) 0:06:34.014 ***** 2026-02-28 00:57:33.321901 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.321906 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.321911 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.321915 | orchestrator | 2026-02-28 00:57:33.321920 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-28 00:57:33.321925 | orchestrator | Saturday 28 February 2026 00:56:33 +0000 (0:00:00.704) 0:06:34.718 ***** 2026-02-28 00:57:33.321930 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.321935 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.321940 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.321944 | orchestrator | 2026-02-28 00:57:33.321952 | orchestrator | TASK [include_role : venus] **************************************************** 2026-02-28 00:57:33.321957 | orchestrator | Saturday 28 February 2026 00:56:33 +0000 (0:00:00.330) 0:06:35.049 ***** 2026-02-28 00:57:33.321962 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.321967 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.321972 | orchestrator | skipping: 
[testbed-node-2] 2026-02-28 00:57:33.321976 | orchestrator | 2026-02-28 00:57:33.321981 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-28 00:57:33.321986 | orchestrator | Saturday 28 February 2026 00:56:34 +0000 (0:00:00.363) 0:06:35.413 ***** 2026-02-28 00:57:33.321994 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.321999 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.322004 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.322009 | orchestrator | 2026-02-28 00:57:33.322013 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-28 00:57:33.322041 | orchestrator | Saturday 28 February 2026 00:56:34 +0000 (0:00:00.391) 0:06:35.805 ***** 2026-02-28 00:57:33.322046 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.322051 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.322055 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.322060 | orchestrator | 2026-02-28 00:57:33.322065 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-02-28 00:57:33.322070 | orchestrator | Saturday 28 February 2026 00:56:35 +0000 (0:00:00.760) 0:06:36.566 ***** 2026-02-28 00:57:33.322074 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:57:33.322079 | orchestrator | 2026-02-28 00:57:33.322084 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-02-28 00:57:33.322089 | orchestrator | Saturday 28 February 2026 00:56:37 +0000 (0:00:01.533) 0:06:38.099 ***** 2026-02-28 00:57:33.322094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-28 00:57:33.322103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-28 00:57:33.322108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-28 00:57:33.322113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:57:33.322123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:57:33.322132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:57:33.322137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:57:33.322142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:57:33.322149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:57:33.322154 | orchestrator | 2026-02-28 00:57:33.322159 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-28 00:57:33.322164 | orchestrator | Saturday 28 February 2026 00:56:39 +0000 (0:00:02.523) 0:06:40.622 ***** 2026-02-28 00:57:33.322169 | orchestrator | changed: [testbed-node-0] => { 2026-02-28 00:57:33.322174 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:57:33.322179 | orchestrator | } 2026-02-28 00:57:33.322184 | orchestrator | changed: [testbed-node-1] => { 2026-02-28 00:57:33.322189 | orchestrator |  "msg": "Notifying handlers" 
2026-02-28 00:57:33.322193 | orchestrator | } 2026-02-28 00:57:33.322198 | orchestrator | changed: [testbed-node-2] => { 2026-02-28 00:57:33.322203 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 00:57:33.322208 | orchestrator | } 2026-02-28 00:57:33.322213 | orchestrator | 2026-02-28 00:57:33.322217 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-28 00:57:33.322222 | orchestrator | Saturday 28 February 2026 00:56:40 +0000 (0:00:00.762) 0:06:41.385 ***** 2026-02-28 00:57:33.322227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-28 00:57:33.322239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:57:33.322244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:57:33.322249 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.322254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-28 00:57:33.322259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:57:33.322267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:57:33.322272 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.322277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-28 00:57:33.322287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:57:33.322295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:57:33.322300 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.322305 | orchestrator | 2026-02-28 00:57:33.322309 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-28 00:57:33.322314 | orchestrator | Saturday 28 February 2026 00:56:42 +0000 (0:00:01.679) 0:06:43.065 ***** 2026-02-28 00:57:33.322319 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:57:33.322324 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:57:33.322329 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:57:33.322334 | orchestrator | 2026-02-28 00:57:33.322339 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-28 00:57:33.322344 | orchestrator | Saturday 28 February 2026 00:56:42 +0000 (0:00:00.746) 0:06:43.812 ***** 2026-02-28 00:57:33.322348 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:57:33.322353 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:57:33.322358 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:57:33.322363 | orchestrator | 2026-02-28 00:57:33.322368 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-28 00:57:33.322373 | orchestrator | Saturday 28 February 2026 00:56:43 +0000 (0:00:00.390) 0:06:44.203 ***** 2026-02-28 00:57:33.322377 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:57:33.322382 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:57:33.322387 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:57:33.322391 | orchestrator | 2026-02-28 00:57:33.322396 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup 
haproxy container] ***************** 2026-02-28 00:57:33.322401 | orchestrator | Saturday 28 February 2026 00:56:44 +0000 (0:00:01.055) 0:06:45.258 ***** 2026-02-28 00:57:33.322406 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:57:33.322410 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:57:33.322415 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:57:33.322420 | orchestrator | 2026-02-28 00:57:33.322425 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-28 00:57:33.322430 | orchestrator | Saturday 28 February 2026 00:56:45 +0000 (0:00:01.337) 0:06:46.596 ***** 2026-02-28 00:57:33.322434 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:57:33.322439 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:57:33.322444 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:57:33.322449 | orchestrator | 2026-02-28 00:57:33.322453 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-02-28 00:57:33.322458 | orchestrator | Saturday 28 February 2026 00:56:46 +0000 (0:00:00.936) 0:06:47.532 ***** 2026-02-28 00:57:33.322463 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.322468 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.322472 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.322481 | orchestrator | 2026-02-28 00:57:33.322486 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-02-28 00:57:33.322491 | orchestrator | Saturday 28 February 2026 00:56:57 +0000 (0:00:11.032) 0:06:58.564 ***** 2026-02-28 00:57:33.322495 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:57:33.322500 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:57:33.322505 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:57:33.322510 | orchestrator | 2026-02-28 00:57:33.322514 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-02-28 
00:57:33.322522 | orchestrator | Saturday 28 February 2026 00:56:58 +0000 (0:00:00.828) 0:06:59.393 ***** 2026-02-28 00:57:33.322527 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.322532 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.322536 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.322541 | orchestrator | 2026-02-28 00:57:33.322546 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-02-28 00:57:33.322551 | orchestrator | Saturday 28 February 2026 00:57:15 +0000 (0:00:16.733) 0:07:16.127 ***** 2026-02-28 00:57:33.322556 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:57:33.322560 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:57:33.322565 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:57:33.322570 | orchestrator | 2026-02-28 00:57:33.322575 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-02-28 00:57:33.322580 | orchestrator | Saturday 28 February 2026 00:57:16 +0000 (0:00:01.207) 0:07:17.335 ***** 2026-02-28 00:57:33.322624 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:57:33.322629 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:57:33.322634 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:57:33.322639 | orchestrator | 2026-02-28 00:57:33.322644 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-02-28 00:57:33.322649 | orchestrator | Saturday 28 February 2026 00:57:25 +0000 (0:00:09.158) 0:07:26.494 ***** 2026-02-28 00:57:33.322653 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.322658 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.322663 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.322668 | orchestrator | 2026-02-28 00:57:33.322672 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-02-28 00:57:33.322677 | 
orchestrator | Saturday 28 February 2026 00:57:25 +0000 (0:00:00.378) 0:07:26.872 ***** 2026-02-28 00:57:33.322682 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.322687 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.322692 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.322696 | orchestrator | 2026-02-28 00:57:33.322701 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-02-28 00:57:33.322706 | orchestrator | Saturday 28 February 2026 00:57:26 +0000 (0:00:00.404) 0:07:27.276 ***** 2026-02-28 00:57:33.322711 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.322715 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.322720 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.322725 | orchestrator | 2026-02-28 00:57:33.322730 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-02-28 00:57:33.322734 | orchestrator | Saturday 28 February 2026 00:57:26 +0000 (0:00:00.756) 0:07:28.032 ***** 2026-02-28 00:57:33.322739 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.322744 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.322749 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.322753 | orchestrator | 2026-02-28 00:57:33.322762 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-02-28 00:57:33.322767 | orchestrator | Saturday 28 February 2026 00:57:27 +0000 (0:00:00.378) 0:07:28.411 ***** 2026-02-28 00:57:33.322772 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.322777 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.322781 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.322786 | orchestrator | 2026-02-28 00:57:33.322795 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-02-28 00:57:33.322800 | 
orchestrator | Saturday 28 February 2026 00:57:27 +0000 (0:00:00.380) 0:07:28.792 ***** 2026-02-28 00:57:33.322805 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:57:33.322809 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:57:33.322814 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:57:33.322819 | orchestrator | 2026-02-28 00:57:33.322824 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-02-28 00:57:33.322829 | orchestrator | Saturday 28 February 2026 00:57:28 +0000 (0:00:00.395) 0:07:29.187 ***** 2026-02-28 00:57:33.322833 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:57:33.322838 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:57:33.322843 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:57:33.322848 | orchestrator | 2026-02-28 00:57:33.322852 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-02-28 00:57:33.322857 | orchestrator | Saturday 28 February 2026 00:57:29 +0000 (0:00:01.426) 0:07:30.614 ***** 2026-02-28 00:57:33.322862 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:57:33.322867 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:57:33.322872 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:57:33.322876 | orchestrator | 2026-02-28 00:57:33.322881 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:57:33.322886 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-28 00:57:33.322891 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-28 00:57:33.322896 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-28 00:57:33.322901 | orchestrator | 2026-02-28 00:57:33.322906 | orchestrator | 2026-02-28 00:57:33.322911 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-28 00:57:33.322915 | orchestrator | Saturday 28 February 2026 00:57:30 +0000 (0:00:00.989) 0:07:31.603 ***** 2026-02-28 00:57:33.322920 | orchestrator | =============================================================================== 2026-02-28 00:57:33.322925 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 16.73s 2026-02-28 00:57:33.322930 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 11.03s 2026-02-28 00:57:33.322934 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.16s 2026-02-28 00:57:33.322939 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 7.54s 2026-02-28 00:57:33.322947 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 7.49s 2026-02-28 00:57:33.322952 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.18s 2026-02-28 00:57:33.322957 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.32s 2026-02-28 00:57:33.322962 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 6.28s 2026-02-28 00:57:33.322966 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 6.25s 2026-02-28 00:57:33.322971 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 5.78s 2026-02-28 00:57:33.322976 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.73s 2026-02-28 00:57:33.322980 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.61s 2026-02-28 00:57:33.322985 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.26s 2026-02-28 00:57:33.322990 | orchestrator | haproxy-config : Copying over 
glance haproxy config --------------------- 5.05s 2026-02-28 00:57:33.322995 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 5.02s 2026-02-28 00:57:33.323000 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 5.00s 2026-02-28 00:57:33.323008 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.58s 2026-02-28 00:57:33.323013 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.56s 2026-02-28 00:57:33.323018 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.48s 2026-02-28 00:57:33.323023 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.45s 2026-02-28 00:57:33.323027 | orchestrator | 2026-02-28 00:57:33 | INFO  | Task 6cfd6d28-095d-4fda-af77-9bb0780c82cb is in state STARTED 2026-02-28 00:57:33.323032 | orchestrator | 2026-02-28 00:57:33 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:57:36.398309 | orchestrator | 2026-02-28 00:57:36 | INFO  | Task e56ad6c2-cc17-4c48-8b43-dbaf6972bef1 is in state STARTED 2026-02-28 00:57:36.398427 | orchestrator | 2026-02-28 00:57:36 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:57:36.398443 | orchestrator | 2026-02-28 00:57:36 | INFO  | Task 6cfd6d28-095d-4fda-af77-9bb0780c82cb is in state STARTED 2026-02-28 00:57:36.398455 | orchestrator | 2026-02-28 00:57:36 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:57:39.386930 | orchestrator | 2026-02-28 00:57:39 | INFO  | Task e56ad6c2-cc17-4c48-8b43-dbaf6972bef1 is in state STARTED 2026-02-28 00:57:39.387360 | orchestrator | 2026-02-28 00:57:39 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:57:39.389520 | orchestrator | 2026-02-28 00:57:39 | INFO  | Task 6cfd6d28-095d-4fda-af77-9bb0780c82cb is in state STARTED 2026-02-28 00:57:39.389621 | 
orchestrator | 2026-02-28 00:57:39 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:01.707157 | orchestrator | 2026-02-28 00:59:01 | INFO  | Task e56ad6c2-cc17-4c48-8b43-dbaf6972bef1 is in state STARTED 2026-02-28 00:59:01.708342 | orchestrator | 2026-02-28 00:59:01 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state STARTED 2026-02-28 00:59:01.711417 | orchestrator | 2026-02-28 00:59:01 | 
INFO  | Task 6cfd6d28-095d-4fda-af77-9bb0780c82cb is in state STARTED 2026-02-28 00:59:01.711791 | orchestrator | 2026-02-28 00:59:01 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:04.770192 | orchestrator | 2026-02-28 00:59:04 | INFO  | Task e56ad6c2-cc17-4c48-8b43-dbaf6972bef1 is in state STARTED 2026-02-28 00:59:04.778181 | orchestrator | 2026-02-28 00:59:04 | INFO  | Task 93e9c0dd-2091-41e5-bc9b-38282261eab1 is in state SUCCESS 2026-02-28 00:59:04.781110 | orchestrator | 2026-02-28 00:59:04.781166 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-28 00:59:04.781185 | orchestrator | 2.16.14 2026-02-28 00:59:04.781199 | orchestrator | 2026-02-28 00:59:04.781205 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-02-28 00:59:04.781212 | orchestrator | 2026-02-28 00:59:04.781218 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-28 00:59:04.781224 | orchestrator | Saturday 28 February 2026 00:47:08 +0000 (0:00:00.780) 0:00:00.780 ***** 2026-02-28 00:59:04.781230 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:04.781237 | orchestrator | 2026-02-28 00:59:04.781242 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-28 00:59:04.781247 | orchestrator | Saturday 28 February 2026 00:47:09 +0000 (0:00:01.180) 0:00:01.960 ***** 2026-02-28 00:59:04.781253 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.781258 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.781264 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.781269 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.781274 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.781279 | orchestrator | ok: [testbed-node-0] 
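The long run of "Task &lt;uuid&gt; is in state STARTED" / "Wait 1 second(s) until the next check" lines earlier in this log is the OSISM client polling the manager until each task reaches a terminal state (here, 93e9c0dd… flipping to SUCCESS). A minimal sketch of such a poll-until-terminal loop; the function and state names are assumptions for illustration, not the real client API:

```python
import time

# States that end the polling for a task (assumed set, mirroring the
# STARTED -> SUCCESS transition visible in the log above).
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_state, interval=1.0):
    """Poll each task via get_state(task_id) until all reach a terminal state.

    get_state is a stand-in for whatever call the real client makes;
    returns a dict mapping task id -> terminal state.
    """
    pending = set(task_ids)
    results = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                results[task_id] = state
        pending -= results.keys()
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return results
```

With a one-second interval this reproduces the cadence seen in the log: one status line per still-running task, then a wait message, repeated until every task finishes.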
2026-02-28 00:59:04.781284 | orchestrator | 2026-02-28 00:59:04.781289 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-28 00:59:04.781342 | orchestrator | Saturday 28 February 2026 00:47:11 +0000 (0:00:01.611) 0:00:03.571 ***** 2026-02-28 00:59:04.781348 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.781353 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.781359 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.781364 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.781369 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.781374 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.781379 | orchestrator | 2026-02-28 00:59:04.781384 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-28 00:59:04.781390 | orchestrator | Saturday 28 February 2026 00:47:11 +0000 (0:00:00.839) 0:00:04.411 ***** 2026-02-28 00:59:04.781395 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.781400 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.781405 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.781410 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.781415 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.781420 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.781425 | orchestrator | 2026-02-28 00:59:04.781431 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-28 00:59:04.781436 | orchestrator | Saturday 28 February 2026 00:47:12 +0000 (0:00:00.915) 0:00:05.326 ***** 2026-02-28 00:59:04.781460 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.781465 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.781470 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.781476 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.781481 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.781486 | 
orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.781491 | orchestrator | 2026-02-28 00:59:04.781497 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-28 00:59:04.781503 | orchestrator | Saturday 28 February 2026 00:47:13 +0000 (0:00:00.593) 0:00:05.919 ***** 2026-02-28 00:59:04.781508 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.781513 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.781518 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.781523 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.781529 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.781534 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.781539 | orchestrator | 2026-02-28 00:59:04.781544 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-28 00:59:04.781550 | orchestrator | Saturday 28 February 2026 00:47:14 +0000 (0:00:00.774) 0:00:06.694 ***** 2026-02-28 00:59:04.781555 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.781560 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.781565 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.781570 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.781575 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.781580 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.781586 | orchestrator | 2026-02-28 00:59:04.781591 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-28 00:59:04.781596 | orchestrator | Saturday 28 February 2026 00:47:15 +0000 (0:00:00.969) 0:00:07.664 ***** 2026-02-28 00:59:04.781602 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.781608 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.781630 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.781636 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.781641 | 
orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.781646 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.781651 | orchestrator | 2026-02-28 00:59:04.781657 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-28 00:59:04.781662 | orchestrator | Saturday 28 February 2026 00:47:15 +0000 (0:00:00.786) 0:00:08.451 ***** 2026-02-28 00:59:04.781668 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.781673 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.781678 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.781683 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.781688 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.781704 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.781710 | orchestrator | 2026-02-28 00:59:04.781716 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-28 00:59:04.781722 | orchestrator | Saturday 28 February 2026 00:47:17 +0000 (0:00:01.122) 0:00:09.574 ***** 2026-02-28 00:59:04.781728 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-28 00:59:04.781734 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-28 00:59:04.781740 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-28 00:59:04.781746 | orchestrator | 2026-02-28 00:59:04.781752 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-28 00:59:04.781976 | orchestrator | Saturday 28 February 2026 00:47:17 +0000 (0:00:00.877) 0:00:10.451 ***** 2026-02-28 00:59:04.781983 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.781988 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.781993 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.782008 | orchestrator | ok: [testbed-node-5] 
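The "Check if podman binary is present" and "Set_fact container_binary" tasks above pick a container CLI for the rest of the ceph-facts role. A rough Python equivalent of that detection; the podman-then-docker fallback order is an assumption based on typical containerized ceph-ansible deployments, not taken from this log:

```python
import shutil

def detect_container_binary(preferred=("podman", "docker")):
    """Return the first container CLI found on PATH.

    Mirrors the check/set_fact pattern from the ceph-facts role:
    probe for podman first, then fall back (fallback order assumed).
    """
    for name in preferred:
        if shutil.which(name) is not None:
            return name
    raise RuntimeError("no container binary found on PATH: " + ", ".join(preferred))
```

`shutil.which` performs the same PATH lookup the role's binary check does, so the returned name can be prefixed onto later commands (e.g. `<binary> exec ceph-mon-... ceph ...`).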
2026-02-28 00:59:04.782048 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.782058 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.782076 | orchestrator | 2026-02-28 00:59:04.782084 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-28 00:59:04.782092 | orchestrator | Saturday 28 February 2026 00:47:19 +0000 (0:00:01.493) 0:00:11.945 ***** 2026-02-28 00:59:04.782100 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-28 00:59:04.782107 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-28 00:59:04.782115 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-28 00:59:04.782122 | orchestrator | 2026-02-28 00:59:04.782131 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-28 00:59:04.782138 | orchestrator | Saturday 28 February 2026 00:47:21 +0000 (0:00:02.551) 0:00:14.497 ***** 2026-02-28 00:59:04.782147 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-28 00:59:04.782155 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-28 00:59:04.782163 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-28 00:59:04.782172 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.782180 | orchestrator | 2026-02-28 00:59:04.782188 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-28 00:59:04.782196 | orchestrator | Saturday 28 February 2026 00:47:22 +0000 (0:00:00.771) 0:00:15.268 ***** 2026-02-28 00:59:04.782207 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  
2026-02-28 00:59:04.782217 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.782223 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.782229 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.782234 | orchestrator | 2026-02-28 00:59:04.782239 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-28 00:59:04.782244 | orchestrator | Saturday 28 February 2026 00:47:23 +0000 (0:00:00.980) 0:00:16.249 ***** 2026-02-28 00:59:04.782251 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.782260 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.782265 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.782270 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.782276 | orchestrator | 2026-02-28 00:59:04.782287 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-28 00:59:04.782298 | orchestrator | Saturday 28 February 2026 00:47:24 +0000 (0:00:00.779) 0:00:17.028 ***** 2026-02-28 00:59:04.782315 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-28 00:47:20.462858', 'end': '2026-02-28 00:47:20.535985', 'delta': '0:00:00.073127', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.782327 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-28 00:47:21.157231', 'end': '2026-02-28 00:47:21.239908', 'delta': '0:00:00.082677', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 
'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.782339 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-28 00:47:21.722837', 'end': '2026-02-28 00:47:21.818124', 'delta': '0:00:00.095287', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.782413 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.782422 | orchestrator | 2026-02-28 00:59:04.782431 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-28 00:59:04.782439 | orchestrator | Saturday 28 February 2026 00:47:24 +0000 (0:00:00.352) 0:00:17.381 ***** 2026-02-28 00:59:04.782447 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.782456 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.782464 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.782472 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.782480 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.782485 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.782490 | orchestrator | 2026-02-28 00:59:04.782496 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-28 00:59:04.782501 | orchestrator | Saturday 28 February 2026 00:47:27 +0000 (0:00:02.265) 0:00:19.646 ***** 
2026-02-28 00:59:04.782506 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-28 00:59:04.782511 | orchestrator |
2026-02-28 00:59:04.782517 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-28 00:59:04.782522 | orchestrator | Saturday 28 February 2026 00:47:27 +0000 (0:00:00.824) 0:00:20.470 *****
2026-02-28 00:59:04.782527 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.782532 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.782537 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.782542 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.782548 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.782560 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.782565 | orchestrator |
2026-02-28 00:59:04.782570 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-28 00:59:04.782576 | orchestrator | Saturday 28 February 2026 00:47:29 +0000 (0:00:01.921) 0:00:22.392 *****
2026-02-28 00:59:04.782581 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.782586 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.782591 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.782596 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.782602 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.782607 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.782656 | orchestrator |
2026-02-28 00:59:04.782663 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-28 00:59:04.782669 | orchestrator | Saturday 28 February 2026 00:47:32 +0000 (0:00:02.230) 0:00:24.622 *****
2026-02-28 00:59:04.782675 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.782681 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.782687 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.782693 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.782698 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.782710 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.782716 | orchestrator |
2026-02-28 00:59:04.782723 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-28 00:59:04.782729 | orchestrator | Saturday 28 February 2026 00:47:33 +0000 (0:00:01.252) 0:00:25.874 *****
2026-02-28 00:59:04.782735 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.782740 | orchestrator |
2026-02-28 00:59:04.782747 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-28 00:59:04.782753 | orchestrator | Saturday 28 February 2026 00:47:33 +0000 (0:00:00.143) 0:00:26.018 *****
2026-02-28 00:59:04.782759 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.782764 | orchestrator |
2026-02-28 00:59:04.782770 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-28 00:59:04.782776 | orchestrator | Saturday 28 February 2026 00:47:34 +0000 (0:00:00.616) 0:00:26.634 *****
2026-02-28 00:59:04.782783 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.782789 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.782795 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.782808 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.782814 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.782820 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.782826 | orchestrator |
2026-02-28 00:59:04.782832 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-28 00:59:04.782838 | orchestrator | Saturday 28 February 2026 00:47:35 +0000 (0:00:01.703) 0:00:28.337 *****
2026-02-28 00:59:04.782844 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.782850 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.782856 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.782862 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.782868 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.782874 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.782880 | orchestrator |
2026-02-28 00:59:04.782886 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-28 00:59:04.782892 | orchestrator | Saturday 28 February 2026 00:47:37 +0000 (0:00:02.022) 0:00:30.360 *****
2026-02-28 00:59:04.782898 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.782904 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.782910 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.782916 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.782922 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.782928 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.782934 | orchestrator |
2026-02-28 00:59:04.782940 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-28 00:59:04.783194 | orchestrator | Saturday 28 February 2026 00:47:38 +0000 (0:00:00.693) 0:00:31.053 *****
2026-02-28 00:59:04.783210 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.783225 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.783233 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.783241 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.783248 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.783256 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.783264 | orchestrator |
2026-02-28 00:59:04.783273 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-28 00:59:04.783281 | orchestrator | Saturday 28 February 2026 00:47:41 +0000 (0:00:03.015) 0:00:34.069 *****
2026-02-28 00:59:04.783286 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.783291 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.783296 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.783301 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.783305 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.783310 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.783315 | orchestrator |
2026-02-28 00:59:04.783320 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-28 00:59:04.783325 | orchestrator | Saturday 28 February 2026 00:47:43 +0000 (0:00:01.449) 0:00:35.518 *****
2026-02-28 00:59:04.783330 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.783335 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.783340 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.783345 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.783350 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.783358 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.783366 | orchestrator |
2026-02-28 00:59:04.783373 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-28 00:59:04.783381 | orchestrator | Saturday 28 February 2026 00:47:45 +0000 (0:00:02.720) 0:00:38.238 *****
2026-02-28 00:59:04.783389 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.783396 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.783403 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.783411 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.783462 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.783471 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.783479 |
orchestrator | 2026-02-28 00:59:04.783487 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-28 00:59:04.783496 | orchestrator | Saturday 28 February 2026 00:47:46 +0000 (0:00:00.865) 0:00:39.104 ***** 2026-02-28 00:59:04.783507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d18609e--ecdb--578d--a05b--e7913934f080-osd--block--4d18609e--ecdb--578d--a05b--e7913934f080', 'dm-uuid-LVM-qgrAIOwSnkhw1QWxPjfQy0LnHVk74kox537minsro3qYF1q9x33m0dfTKleDoHvM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.783524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dcf33d59--3ae6--5017--b2aa--1b02884ceea7-osd--block--dcf33d59--3ae6--5017--b2aa--1b02884ceea7', 'dm-uuid-LVM-mQcW5Fd3FgXWizSYHN01zaatnwPy7HyWH3DTpYJvJFi2eq4JqpT9LIOS6UR4q7nc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.783549 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-02-28 00:59:04.783563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.783569 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.783574 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.783579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.783584 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.783589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--73c4f4bf--6139--5634--9e57--de597eca9964-osd--block--73c4f4bf--6139--5634--9e57--de597eca9964', 'dm-uuid-LVM-WhiGASCrF3mL39HD4JICU92YzJF5yiKVE1Spqe9clI97Bg7oeard2ZXeo9zpd8oz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.783598 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--17f6d453--f54a--57d2--bd55--b12b469b0db8-osd--block--17f6d453--f54a--57d2--bd55--b12b469b0db8', 'dm-uuid-LVM-OPiv2ckmCK2izFfGxciwOHhEGZyxB9cZupaOmhobB5kxnZKwRpRj2hWGH7kjGlBy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.783603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.783670 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.783678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.783683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.783690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103', 'scsi-SQEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part1', 'scsi-SQEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part14', 'scsi-SQEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part15', 'scsi-SQEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part16', 'scsi-SQEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:59:04.783920 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.783941 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.783963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4d18609e--ecdb--578d--a05b--e7913934f080-osd--block--4d18609e--ecdb--578d--a05b--e7913934f080'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EpLw32-chyL-yRPv-Nd3g-kw4H-Ai5L-TRM6a3', 'scsi-0QEMU_QEMU_HARDDISK_9a1bcb93-f154-4a17-8f9d-a00d049f4cc1', 'scsi-SQEMU_QEMU_HARDDISK_9a1bcb93-f154-4a17-8f9d-a00d049f4cc1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:59:04.783971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.783978 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.783984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--dcf33d59--3ae6--5017--b2aa--1b02884ceea7-osd--block--dcf33d59--3ae6--5017--b2aa--1b02884ceea7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MkHVfU-Fytw-UqRH-fjRb-Cqoc-lqqg-qkduQV', 'scsi-0QEMU_QEMU_HARDDISK_16fcc6e7-951a-43ed-8f3a-017ae19ace76', 'scsi-SQEMU_QEMU_HARDDISK_16fcc6e7-951a-43ed-8f3a-017ae19ace76'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:59:04.783990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.783996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afb4b4ce-eec7-46b2-91b5-87577cac503b', 'scsi-SQEMU_QEMU_HARDDISK_afb4b4ce-eec7-46b2-91b5-87577cac503b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:59:04.784031 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97', 'scsi-SQEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part1', 'scsi-SQEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part14', 'scsi-SQEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part15', 'scsi-SQEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': 
[], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part16', 'scsi-SQEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:59:04.784039 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--73c4f4bf--6139--5634--9e57--de597eca9964-osd--block--73c4f4bf--6139--5634--9e57--de597eca9964'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eJbIch-EZat-lfeB-Foxv-JJgp-aTG0-8uTQ1P', 'scsi-0QEMU_QEMU_HARDDISK_2388cee9-22a9-4416-93b3-e236454bc031', 'scsi-SQEMU_QEMU_HARDDISK_2388cee9-22a9-4416-93b3-e236454bc031'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:59:04.784046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-02-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': 
'0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:59:04.784055 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--17f6d453--f54a--57d2--bd55--b12b469b0db8-osd--block--17f6d453--f54a--57d2--bd55--b12b469b0db8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1rmCQt-VoW0-sOI6-C15c-CqIq-V4tx-5iix1t', 'scsi-0QEMU_QEMU_HARDDISK_fa3b351c-e54b-439c-bac1-d7e08e27df4b', 'scsi-SQEMU_QEMU_HARDDISK_fa3b351c-e54b-439c-bac1-d7e08e27df4b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:59:04.784078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_151bff65-91b8-4b11-a525-96a3d98709b9', 'scsi-SQEMU_QEMU_HARDDISK_151bff65-91b8-4b11-a525-96a3d98709b9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:59:04.784085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-02-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:59:04.784091 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--04fa4cbf--2eb3--5c27--a3dd--f7c2dcd9ac18-osd--block--04fa4cbf--2eb3--5c27--a3dd--f7c2dcd9ac18', 'dm-uuid-LVM-XP4sp69lMwdqwWlMXCLx6v67l4rUhVpWhDvQF6SXe0SHtVySxBy3H9UMbto1Dw5v'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784097 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.784104 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--f45f70cf--4b1a--5b52--bc0a--6a4d28c0a539-osd--block--f45f70cf--4b1a--5b52--bc0a--6a4d28c0a539', 'dm-uuid-LVM-Bt9ZLP0VROEB0wZ4ICpmM7zG2lv1hlPV5pcpsdqHuzg867lux994SUQj9QOymTAs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784110 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784167 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784194 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784199 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784254 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878', 'scsi-SQEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part1', 'scsi-SQEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part14', 'scsi-SQEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part15', 'scsi-SQEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part16', 'scsi-SQEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:59:04.784260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784271 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--04fa4cbf--2eb3--5c27--a3dd--f7c2dcd9ac18-osd--block--04fa4cbf--2eb3--5c27--a3dd--f7c2dcd9ac18'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uKKsbr-x1Kk-0mgN-OpmP-VMSJ-h0lC-XG4wxU', 'scsi-0QEMU_QEMU_HARDDISK_96cb3389-09b8-4702-8328-a447a406a3bc', 'scsi-SQEMU_QEMU_HARDDISK_96cb3389-09b8-4702-8328-a447a406a3bc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 00:59:04.784281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 00:59:04.784289 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.784294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 00:59:04.784311 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f45f70cf--4b1a--5b52--bc0a--6a4d28c0a539-osd--block--f45f70cf--4b1a--5b52--bc0a--6a4d28c0a539'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rSfWzk-1CMq-fbaa-7rVi-ULYC-o1bD-yp5IFn', 'scsi-0QEMU_QEMU_HARDDISK_810ccfde-b37c-4538-b69c-a55db736621a', 'scsi-SQEMU_QEMU_HARDDISK_810ccfde-b37c-4538-b69c-a55db736621a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:59:04.784317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d43ef31-86cc-4f7d-aec6-7bed74b0054d', 'scsi-SQEMU_QEMU_HARDDISK_1d43ef31-86cc-4f7d-aec6-7bed74b0054d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d43ef31-86cc-4f7d-aec6-7bed74b0054d-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d43ef31-86cc-4f7d-aec6-7bed74b0054d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d43ef31-86cc-4f7d-aec6-7bed74b0054d-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d43ef31-86cc-4f7d-aec6-7bed74b0054d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d43ef31-86cc-4f7d-aec6-7bed74b0054d-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d43ef31-86cc-4f7d-aec6-7bed74b0054d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 
'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d43ef31-86cc-4f7d-aec6-7bed74b0054d-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d43ef31-86cc-4f7d-aec6-7bed74b0054d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:59:04.784327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-02-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:59:04.784336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aebfdaae-19e6-4277-9533-aca5f477cfa9', 'scsi-SQEMU_QEMU_HARDDISK_aebfdaae-19e6-4277-9533-aca5f477cfa9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:59:04.784354 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-02-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:59:04.784360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-28 00:59:04.784376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_370cf4d2-63bd-48d2-9d3a-0a18fe924203', 'scsi-SQEMU_QEMU_HARDDISK_370cf4d2-63bd-48d2-9d3a-0a18fe924203'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_370cf4d2-63bd-48d2-9d3a-0a18fe924203-part1', 'scsi-SQEMU_QEMU_HARDDISK_370cf4d2-63bd-48d2-9d3a-0a18fe924203-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_370cf4d2-63bd-48d2-9d3a-0a18fe924203-part14', 'scsi-SQEMU_QEMU_HARDDISK_370cf4d2-63bd-48d2-9d3a-0a18fe924203-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_370cf4d2-63bd-48d2-9d3a-0a18fe924203-part15', 'scsi-SQEMU_QEMU_HARDDISK_370cf4d2-63bd-48d2-9d3a-0a18fe924203-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_370cf4d2-63bd-48d2-9d3a-0a18fe924203-part16', 'scsi-SQEMU_QEMU_HARDDISK_370cf4d2-63bd-48d2-9d3a-0a18fe924203-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 00:59:04.784702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-02-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 00:59:04.784736 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.784744 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.784752 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.784761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:59:04.784856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba97938d-26f2-4bf0-9eef-5f523c574980', 'scsi-SQEMU_QEMU_HARDDISK_ba97938d-26f2-4bf0-9eef-5f523c574980'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba97938d-26f2-4bf0-9eef-5f523c574980-part1', 'scsi-SQEMU_QEMU_HARDDISK_ba97938d-26f2-4bf0-9eef-5f523c574980-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba97938d-26f2-4bf0-9eef-5f523c574980-part14', 'scsi-SQEMU_QEMU_HARDDISK_ba97938d-26f2-4bf0-9eef-5f523c574980-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba97938d-26f2-4bf0-9eef-5f523c574980-part15', 'scsi-SQEMU_QEMU_HARDDISK_ba97938d-26f2-4bf0-9eef-5f523c574980-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba97938d-26f2-4bf0-9eef-5f523c574980-part16', 'scsi-SQEMU_QEMU_HARDDISK_ba97938d-26f2-4bf0-9eef-5f523c574980-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:59:04.784884 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-02-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:59:04.784890 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.784895 | orchestrator | 2026-02-28 00:59:04.784901 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-28 00:59:04.784906 | orchestrator | Saturday 28 February 2026 00:47:48 +0000 (0:00:02.218) 0:00:41.323 ***** 2026-02-28 00:59:04.784913 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d18609e--ecdb--578d--a05b--e7913934f080-osd--block--4d18609e--ecdb--578d--a05b--e7913934f080', 'dm-uuid-LVM-qgrAIOwSnkhw1QWxPjfQy0LnHVk74kox537minsro3qYF1q9x33m0dfTKleDoHvM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.784919 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dcf33d59--3ae6--5017--b2aa--1b02884ceea7-osd--block--dcf33d59--3ae6--5017--b2aa--1b02884ceea7', 'dm-uuid-LVM-mQcW5Fd3FgXWizSYHN01zaatnwPy7HyWH3DTpYJvJFi2eq4JqpT9LIOS6UR4q7nc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.784929 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.784935 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.784940 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.784959 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.784964 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.784969 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.784978 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.784983 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785026 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 
'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103', 'scsi-SQEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part1', 'scsi-SQEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part14', 'scsi-SQEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part15', 'scsi-SQEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part16', 'scsi-SQEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': 
'4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785033 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4d18609e--ecdb--578d--a05b--e7913934f080-osd--block--4d18609e--ecdb--578d--a05b--e7913934f080'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EpLw32-chyL-yRPv-Nd3g-kw4H-Ai5L-TRM6a3', 'scsi-0QEMU_QEMU_HARDDISK_9a1bcb93-f154-4a17-8f9d-a00d049f4cc1', 'scsi-SQEMU_QEMU_HARDDISK_9a1bcb93-f154-4a17-8f9d-a00d049f4cc1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785043 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--dcf33d59--3ae6--5017--b2aa--1b02884ceea7-osd--block--dcf33d59--3ae6--5017--b2aa--1b02884ceea7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MkHVfU-Fytw-UqRH-fjRb-Cqoc-lqqg-qkduQV', 'scsi-0QEMU_QEMU_HARDDISK_16fcc6e7-951a-43ed-8f3a-017ae19ace76', 'scsi-SQEMU_QEMU_HARDDISK_16fcc6e7-951a-43ed-8f3a-017ae19ace76'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785052 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afb4b4ce-eec7-46b2-91b5-87577cac503b', 'scsi-SQEMU_QEMU_HARDDISK_afb4b4ce-eec7-46b2-91b5-87577cac503b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785069 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-02-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785075 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--73c4f4bf--6139--5634--9e57--de597eca9964-osd--block--73c4f4bf--6139--5634--9e57--de597eca9964', 'dm-uuid-LVM-WhiGASCrF3mL39HD4JICU92YzJF5yiKVE1Spqe9clI97Bg7oeard2ZXeo9zpd8oz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785084 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--17f6d453--f54a--57d2--bd55--b12b469b0db8-osd--block--17f6d453--f54a--57d2--bd55--b12b469b0db8', 'dm-uuid-LVM-OPiv2ckmCK2izFfGxciwOHhEGZyxB9cZupaOmhobB5kxnZKwRpRj2hWGH7kjGlBy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785089 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785095 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785103 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785120 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785126 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.785132 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785141 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785178 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785187 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785214 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97', 'scsi-SQEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part1', 'scsi-SQEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part14', 'scsi-SQEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part15', 'scsi-SQEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part16', 'scsi-SQEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-28 00:59:04.785228 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--73c4f4bf--6139--5634--9e57--de597eca9964-osd--block--73c4f4bf--6139--5634--9e57--de597eca9964'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eJbIch-EZat-lfeB-Foxv-JJgp-aTG0-8uTQ1P', 'scsi-0QEMU_QEMU_HARDDISK_2388cee9-22a9-4416-93b3-e236454bc031', 'scsi-SQEMU_QEMU_HARDDISK_2388cee9-22a9-4416-93b3-e236454bc031'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785238 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--17f6d453--f54a--57d2--bd55--b12b469b0db8-osd--block--17f6d453--f54a--57d2--bd55--b12b469b0db8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1rmCQt-VoW0-sOI6-C15c-CqIq-V4tx-5iix1t', 'scsi-0QEMU_QEMU_HARDDISK_fa3b351c-e54b-439c-bac1-d7e08e27df4b', 'scsi-SQEMU_QEMU_HARDDISK_fa3b351c-e54b-439c-bac1-d7e08e27df4b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785270 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_151bff65-91b8-4b11-a525-96a3d98709b9', 'scsi-SQEMU_QEMU_HARDDISK_151bff65-91b8-4b11-a525-96a3d98709b9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785297 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-02-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785306 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.785314 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--04fa4cbf--2eb3--5c27--a3dd--f7c2dcd9ac18-osd--block--04fa4cbf--2eb3--5c27--a3dd--f7c2dcd9ac18', 'dm-uuid-LVM-XP4sp69lMwdqwWlMXCLx6v67l4rUhVpWhDvQF6SXe0SHtVySxBy3H9UMbto1Dw5v'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785327 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785336 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785345 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785357 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785382 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': 
[], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785390 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785406 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785415 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785445 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d43ef31-86cc-4f7d-aec6-7bed74b0054d', 'scsi-SQEMU_QEMU_HARDDISK_1d43ef31-86cc-4f7d-aec6-7bed74b0054d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d43ef31-86cc-4f7d-aec6-7bed74b0054d-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d43ef31-86cc-4f7d-aec6-7bed74b0054d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d43ef31-86cc-4f7d-aec6-7bed74b0054d-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d43ef31-86cc-4f7d-aec6-7bed74b0054d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d43ef31-86cc-4f7d-aec6-7bed74b0054d-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d43ef31-86cc-4f7d-aec6-7bed74b0054d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': 
[], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d43ef31-86cc-4f7d-aec6-7bed74b0054d-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d43ef31-86cc-4f7d-aec6-7bed74b0054d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785457 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-02-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785471 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f45f70cf--4b1a--5b52--bc0a--6a4d28c0a539-osd--block--f45f70cf--4b1a--5b52--bc0a--6a4d28c0a539', 'dm-uuid-LVM-Bt9ZLP0VROEB0wZ4ICpmM7zG2lv1hlPV5pcpsdqHuzg867lux994SUQj9QOymTAs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785482 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785490 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785825 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785904 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785920 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785926 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785931 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785936 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.785941 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785968 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878', 'scsi-SQEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part1', 'scsi-SQEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part14', 'scsi-SQEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part15', 'scsi-SQEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part16', 'scsi-SQEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-28 00:59:04.785980 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--04fa4cbf--2eb3--5c27--a3dd--f7c2dcd9ac18-osd--block--04fa4cbf--2eb3--5c27--a3dd--f7c2dcd9ac18'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uKKsbr-x1Kk-0mgN-OpmP-VMSJ-h0lC-XG4wxU', 'scsi-0QEMU_QEMU_HARDDISK_96cb3389-09b8-4702-8328-a447a406a3bc', 'scsi-SQEMU_QEMU_HARDDISK_96cb3389-09b8-4702-8328-a447a406a3bc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785986 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f45f70cf--4b1a--5b52--bc0a--6a4d28c0a539-osd--block--f45f70cf--4b1a--5b52--bc0a--6a4d28c0a539'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rSfWzk-1CMq-fbaa-7rVi-ULYC-o1bD-yp5IFn', 'scsi-0QEMU_QEMU_HARDDISK_810ccfde-b37c-4538-b69c-a55db736621a', 'scsi-SQEMU_QEMU_HARDDISK_810ccfde-b37c-4538-b69c-a55db736621a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.785992 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.786000 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.786066 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aebfdaae-19e6-4277-9533-aca5f477cfa9', 'scsi-SQEMU_QEMU_HARDDISK_aebfdaae-19e6-4277-9533-aca5f477cfa9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.786075 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.786080 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-02-28 00:59:04.786086 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-02-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.786092 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.786100 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.786124 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.786130 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.786136 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_370cf4d2-63bd-48d2-9d3a-0a18fe924203', 'scsi-SQEMU_QEMU_HARDDISK_370cf4d2-63bd-48d2-9d3a-0a18fe924203'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_370cf4d2-63bd-48d2-9d3a-0a18fe924203-part1', 'scsi-SQEMU_QEMU_HARDDISK_370cf4d2-63bd-48d2-9d3a-0a18fe924203-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_370cf4d2-63bd-48d2-9d3a-0a18fe924203-part14', 'scsi-SQEMU_QEMU_HARDDISK_370cf4d2-63bd-48d2-9d3a-0a18fe924203-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_370cf4d2-63bd-48d2-9d3a-0a18fe924203-part15', 'scsi-SQEMU_QEMU_HARDDISK_370cf4d2-63bd-48d2-9d3a-0a18fe924203-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_370cf4d2-63bd-48d2-9d3a-0a18fe924203-part16', 'scsi-SQEMU_QEMU_HARDDISK_370cf4d2-63bd-48d2-9d3a-0a18fe924203-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-28 00:59:04.786158 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-02-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.786171 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.786176 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.786181 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.786186 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.786191 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.786196 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.786201 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.786209 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.786232 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.786238 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:59:04.786243 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba97938d-26f2-4bf0-9eef-5f523c574980', 'scsi-SQEMU_QEMU_HARDDISK_ba97938d-26f2-4bf0-9eef-5f523c574980'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba97938d-26f2-4bf0-9eef-5f523c574980-part1', 'scsi-SQEMU_QEMU_HARDDISK_ba97938d-26f2-4bf0-9eef-5f523c574980-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba97938d-26f2-4bf0-9eef-5f523c574980-part14', 'scsi-SQEMU_QEMU_HARDDISK_ba97938d-26f2-4bf0-9eef-5f523c574980-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba97938d-26f2-4bf0-9eef-5f523c574980-part15', 'scsi-SQEMU_QEMU_HARDDISK_ba97938d-26f2-4bf0-9eef-5f523c574980-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba97938d-26f2-4bf0-9eef-5f523c574980-part16', 
'scsi-SQEMU_QEMU_HARDDISK_ba97938d-26f2-4bf0-9eef-5f523c574980-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:59:04.786252 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-02-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:59:04.786261 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.786266 | orchestrator |
2026-02-28 00:59:04.786284 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-28 00:59:04.786291 | orchestrator | Saturday 28 February 2026 00:47:51 +0000 (0:00:02.941) 0:00:44.264 *****
2026-02-28 00:59:04.786296 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.786301 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.786306 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.786311 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.786315 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.786320 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.786325 | orchestrator |
2026-02-28 00:59:04.786330 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-28 00:59:04.786335 | orchestrator | Saturday 28 February 2026 00:47:53 +0000 (0:00:02.154) 0:00:46.419 *****
2026-02-28 00:59:04.786340 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.786345 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.786349 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.786354 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.786359 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.786364 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.786368 | orchestrator |
2026-02-28 00:59:04.786373 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-28 00:59:04.786378 | orchestrator | Saturday 28 February 2026 00:47:55 +0000 (0:00:01.314) 0:00:47.734 *****
2026-02-28 00:59:04.786469 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.786476 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.786482 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.786487 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.786492 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.786497 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.786503 | orchestrator |
2026-02-28 00:59:04.786508 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-28 00:59:04.786513 | orchestrator | Saturday 28 February 2026 00:47:56 +0000 (0:00:01.738) 0:00:49.472 *****
2026-02-28 00:59:04.786519 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.786525 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.786531 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.786537 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.786543 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.786549 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.786555 | orchestrator |
2026-02-28 00:59:04.786561 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-28 00:59:04.786567 | orchestrator | Saturday 28 February 2026 00:47:58 +0000 (0:00:01.539) 0:00:51.011 *****
2026-02-28 00:59:04.786572 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.786578 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.786584 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.786591 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.786596 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.786601 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.786606 | orchestrator |
2026-02-28 00:59:04.786631 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-28 00:59:04.786638 | orchestrator | Saturday 28 February 2026 00:48:00 +0000 (0:00:01.723) 0:00:52.735 *****
2026-02-28 00:59:04.786650 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.786654 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.786659 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.786664 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.786669 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.786674 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.786679 | orchestrator |
2026-02-28 00:59:04.786684 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-28 00:59:04.786688 | orchestrator | Saturday 28 February 2026 00:48:01 +0000 (0:00:01.595) 0:00:54.331 *****
2026-02-28 00:59:04.786693 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-28 00:59:04.786699 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-28 00:59:04.786704 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-28 00:59:04.786709 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-28 00:59:04.786714 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-28 00:59:04.786718 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-28 00:59:04.786723 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-28 00:59:04.786728 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-28 00:59:04.786733 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-28 00:59:04.786738 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-28 00:59:04.786743 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-28 00:59:04.786748 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-28 00:59:04.786753 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-28 00:59:04.786757 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-28 00:59:04.786762 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-28 00:59:04.786767 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-28 00:59:04.786772 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-28 00:59:04.786781 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-28 00:59:04.786786 | orchestrator |
2026-02-28 00:59:04.786791 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-28 00:59:04.786796 | orchestrator | Saturday 28 February 2026 00:48:07 +0000 (0:00:05.589) 0:00:59.920 *****
2026-02-28 00:59:04.786801 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-28 00:59:04.786806 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-28 00:59:04.786811 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-28 00:59:04.786815 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.786820 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-28 00:59:04.786825 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-28 00:59:04.786830 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-28 00:59:04.786835 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.786839 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-28 00:59:04.786864 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-28 00:59:04.786869 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-28 00:59:04.786874 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-28 00:59:04.786879 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-28 00:59:04.786884 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-28 00:59:04.786889 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.786897 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.786904 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-28 00:59:04.786911 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-28 00:59:04.786921 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-28 00:59:04.786926 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-28 00:59:04.786940 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.786945 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-28 00:59:04.786950 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-28 00:59:04.786955 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.786960 | orchestrator |
2026-02-28 00:59:04.786965 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-28 00:59:04.786969 | orchestrator | Saturday 28 February 2026 00:48:08 +0000 (0:00:01.141) 0:01:01.062 *****
2026-02-28 00:59:04.786974 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.786979 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.786984 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.786989 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4, testbed-node-3, testbed-node-5
2026-02-28 00:59:04.786994 | orchestrator |
2026-02-28 00:59:04.786999 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-28 00:59:04.787005 | orchestrator | Saturday 28 February 2026 00:48:10 +0000 (0:00:01.714) 0:01:02.776 *****
2026-02-28 00:59:04.787010 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.787015 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.787020 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.787024 | orchestrator |
2026-02-28 00:59:04.787029 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-28 00:59:04.787034 | orchestrator | Saturday 28 February 2026 00:48:10 +0000 (0:00:00.632) 0:01:03.408 *****
2026-02-28 00:59:04.787039 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.787044 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.787049 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.787054 | orchestrator |
2026-02-28 00:59:04.787058 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-28 00:59:04.787063 | orchestrator | Saturday 28 February 2026 00:48:11 +0000 (0:00:00.407) 0:01:03.816 *****
2026-02-28 00:59:04.787068 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.787073 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.787078 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.787083 | orchestrator |
2026-02-28 00:59:04.787088 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-28 00:59:04.787092 | orchestrator | Saturday 28 February 2026 00:48:12 +0000 (0:00:00.746) 0:01:04.562 *****
2026-02-28 00:59:04.787097 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.787102 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.787107 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.787112 | orchestrator |
2026-02-28 00:59:04.787117 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-28 00:59:04.787122 | orchestrator | Saturday 28 February 2026 00:48:13 +0000 (0:00:01.028) 0:01:05.591 *****
2026-02-28 00:59:04.787126 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-28 00:59:04.787131 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-28 00:59:04.787136 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-28 00:59:04.787144 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.787152 | orchestrator |
2026-02-28 00:59:04.787157 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-28 00:59:04.787162 | orchestrator | Saturday 28 February 2026 00:48:13 +0000 (0:00:00.494) 0:01:06.085 *****
2026-02-28 00:59:04.787167 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-28 00:59:04.787172 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-28 00:59:04.787179 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-28 00:59:04.787191 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.787196 | orchestrator |
2026-02-28 00:59:04.787200 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-28 00:59:04.787205 | orchestrator | Saturday 28 February 2026 00:48:14 +0000 (0:00:00.473) 0:01:06.558 *****
2026-02-28 00:59:04.787214 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-28 00:59:04.787219 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-28 00:59:04.787224 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-28 00:59:04.787229 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.787234 | orchestrator |
2026-02-28 00:59:04.787239 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-28 00:59:04.787244 | orchestrator | Saturday 28 February 2026 00:48:14 +0000 (0:00:00.443) 0:01:07.002 *****
2026-02-28 00:59:04.787869 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.787905 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.787913 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.787920 | orchestrator |
2026-02-28 00:59:04.787928 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-28 00:59:04.787937 | orchestrator | Saturday 28 February 2026 00:48:15 +0000 (0:00:00.586) 0:01:07.589 *****
2026-02-28 00:59:04.787944 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-28 00:59:04.787952 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-28 00:59:04.787993 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-28 00:59:04.787998 | orchestrator |
2026-02-28 00:59:04.788003 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-28 00:59:04.788008 | orchestrator | Saturday 28 February 2026 00:48:17 +0000 (0:00:02.009) 0:01:09.598 *****
2026-02-28 00:59:04.788013 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-28 00:59:04.788019 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-28 00:59:04.788025 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-28 00:59:04.788029 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-28 00:59:04.788034 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-28 00:59:04.788039 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-28 00:59:04.788043 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-28 00:59:04.788048 | orchestrator |
2026-02-28 00:59:04.788052 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-28 00:59:04.788057 | orchestrator | Saturday 28 February 2026 00:48:18 +0000 (0:00:01.733) 0:01:11.332 *****
2026-02-28 00:59:04.788062 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-28 00:59:04.788066 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-28 00:59:04.788071 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-28 00:59:04.788075 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-28 00:59:04.788080 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-28 00:59:04.788085 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-28 00:59:04.788089 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-28 00:59:04.788094 | orchestrator |
2026-02-28 00:59:04.788100 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-28 00:59:04.788107 | orchestrator | Saturday 28 February 2026 00:48:21 +0000 (0:00:02.383) 0:01:13.715 *****
2026-02-28 00:59:04.788116 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:59:04.788130 | orchestrator |
2026-02-28 00:59:04.788135 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-28 00:59:04.788140 | orchestrator | Saturday 28 February 2026 00:48:22 +0000 (0:00:01.736) 0:01:15.452 *****
2026-02-28 00:59:04.788144 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:59:04.788149 | orchestrator |
2026-02-28 00:59:04.788154 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-28 00:59:04.788158 | orchestrator | Saturday 28 February 2026 00:48:25 +0000 (0:00:02.276) 0:01:17.729 *****
2026-02-28 00:59:04.788163 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.788168 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.788173 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.788178 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.788182 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.788187 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.788191 | orchestrator |
2026-02-28 00:59:04.788196 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-28 00:59:04.788201 | orchestrator | Saturday 28 February 2026 00:48:27 +0000 (0:00:02.177) 0:01:19.907 *****
2026-02-28 00:59:04.788205 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.788210 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.788215 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.788219 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.788224 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.788228 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.788233 | orchestrator |
2026-02-28 00:59:04.788238 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-28 00:59:04.788243 | orchestrator | Saturday 28 February 2026 00:48:28 +0000 (0:00:00.764) 0:01:20.671 *****
2026-02-28 00:59:04.788247 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.788252 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.788257 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.788262 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.788267 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.788271 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.788276 | orchestrator |
2026-02-28 00:59:04.788286 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-28 00:59:04.788498 | orchestrator | Saturday 28 February 2026 00:48:29 +0000 (0:00:01.566) 0:01:22.237 *****
2026-02-28 00:59:04.788504 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.788508 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.788513 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.788518 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.788522 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.788527 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.788531 | orchestrator |
2026-02-28 00:59:04.788536 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-28 00:59:04.788541 | orchestrator | Saturday 28 February 2026 00:48:30 +0000 (0:00:01.197) 0:01:23.434 *****
2026-02-28 00:59:04.788545 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.788550 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.788555 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.788559 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.788564 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.788642 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.788655 | orchestrator |
2026-02-28 00:59:04.788662 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-28 00:59:04.788669 | orchestrator | Saturday 28 February 2026 00:48:32 +0000 (0:00:01.746) 0:01:25.181 *****
2026-02-28 00:59:04.788676 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.788683 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.788690 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.788706 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.788714 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.788721 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.788729 | orchestrator |
2026-02-28 00:59:04.788736 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-28 00:59:04.788743 | orchestrator | Saturday 28 February 2026 00:48:33 +0000 (0:00:00.831) 0:01:26.012 *****
2026-02-28 00:59:04.788750 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.788757 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.788764 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.788771 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.788779 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.788787 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.788792 | orchestrator |
2026-02-28 00:59:04.788797 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-28 00:59:04.788801 | orchestrator | Saturday 28 February 2026 00:48:34 +0000 (0:00:01.242) 0:01:27.255 *****
2026-02-28 00:59:04.788806 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.788811 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.788815 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.788820 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.788825 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.788830 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.788834 | orchestrator |
2026-02-28 00:59:04.788839 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-28 00:59:04.788844 | orchestrator | Saturday 28 February 2026 00:48:36 +0000 (0:00:01.400) 0:01:28.655 *****
2026-02-28 00:59:04.788848 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.788853 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.788857 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.788862 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.788867 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.788871 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.788876 | orchestrator |
2026-02-28 00:59:04.788881 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-28 00:59:04.788885 | orchestrator | Saturday 28 February 2026 00:48:37 +0000 (0:00:01.726) 0:01:30.381 *****
2026-02-28 00:59:04.788890 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.788895 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.788899 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.788904 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.788909 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.788914 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.788921 | orchestrator |
2026-02-28 00:59:04.788930 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-28 00:59:04.788941 | orchestrator | Saturday 28 February 2026 00:48:38 +0000 (0:00:00.804) 0:01:31.186 *****
2026-02-28 00:59:04.788948 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.788955 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.788961 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.788968 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.788975 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.788982 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.788989 | orchestrator |
2026-02-28 00:59:04.788996 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-28 00:59:04.789003 | orchestrator | Saturday 28 February 2026 00:48:39 +0000 (0:00:01.064) 0:01:32.251 *****
2026-02-28 00:59:04.789009 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.789016 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.789024 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.789031 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.789038 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.789045 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.789053 | orchestrator |
2026-02-28 00:59:04.789061 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-28 00:59:04.789076 | orchestrator | Saturday 28 February 2026 00:48:40 +0000 (0:00:00.670) 0:01:32.921 *****
2026-02-28 00:59:04.789082 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.789089 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.789096 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.789104 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.789112 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.789119 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.789127 | orchestrator |
2026-02-28 00:59:04.789135 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-28 00:59:04.789142 | orchestrator | Saturday 28 February 2026 00:48:41 +0000 (0:00:00.950) 0:01:33.872 *****
2026-02-28 00:59:04.789147 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.789151 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.789156 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.789161 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.789165 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.789176 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.789180 | orchestrator |
2026-02-28 00:59:04.789185 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-28 00:59:04.789190 | orchestrator | Saturday 28 February 2026 00:48:42 +0000 (0:00:00.691) 0:01:34.563 *****
2026-02-28 00:59:04.789194 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.789199 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.789203 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.789208 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.789212 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.789217 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.789221 | orchestrator |
2026-02-28 00:59:04.789226 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-28 00:59:04.789231 | orchestrator | Saturday 28 February 2026 00:48:43 +0000 (0:00:01.329) 0:01:35.893 *****
2026-02-28 00:59:04.789235 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.789240 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.789245 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.789251 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.789287 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.789293 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.789298 | orchestrator |
2026-02-28 00:59:04.789304 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-28 00:59:04.789309 | orchestrator | Saturday 28 February 2026 00:48:44 +0000 (0:00:00.907) 0:01:36.801 *****
2026-02-28 00:59:04.789315 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.789320 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.789325 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.789330 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.789336 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.789341 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.789346 | orchestrator |
2026-02-28 00:59:04.789352 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-28 00:59:04.789357 | orchestrator | Saturday 28 February 2026 00:48:45 +0000 (0:00:01.276) 0:01:38.077 *****
2026-02-28 00:59:04.789363 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.789368 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.789374 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.789379 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.789385 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.789390 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.789396 | orchestrator |
2026-02-28 00:59:04.789401 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-28 00:59:04.789406 | orchestrator | Saturday 28 February 2026 00:48:46 +0000 (0:00:01.052) 0:01:39.130 *****
2026-02-28 00:59:04.789412 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.789417 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.789432 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.789438 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.789443 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.789449 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.789454 | orchestrator |
2026-02-28 00:59:04.789459 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-28 00:59:04.789465 | orchestrator | Saturday 28 February 2026 00:48:48 +0000 (0:00:01.890) 0:01:41.020 *****
2026-02-28 00:59:04.789470 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:59:04.789476 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:59:04.789481 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:59:04.789486 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:59:04.789492 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:59:04.789497 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:59:04.789503 | orchestrator |
2026-02-28 00:59:04.789508 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-28 00:59:04.789514 | orchestrator | Saturday 28 February 2026 00:48:50 +0000 (0:00:01.890) 0:01:42.910 *****
2026-02-28 00:59:04.789519 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:59:04.789525 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:59:04.789530 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:59:04.789535 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:59:04.789540 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:59:04.789546 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:59:04.789551 | orchestrator |
2026-02-28 00:59:04.789556 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-28 00:59:04.789562 | orchestrator | Saturday 28 February 2026 00:48:53 +0000 (0:00:02.611) 0:01:45.522 *****
2026-02-28 00:59:04.789567 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:59:04.789573 | orchestrator |
2026-02-28 00:59:04.789579 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-28 00:59:04.789584 | orchestrator | Saturday 28 February 2026 00:48:54 +0000 (0:00:01.793) 0:01:47.315 *****
2026-02-28 00:59:04.789589 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.789595 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.789600 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.789605 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.789610 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.789660 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.789665 | orchestrator |
2026-02-28 00:59:04.789670 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-28 00:59:04.789674 | orchestrator | Saturday 28 February 2026 00:48:55 +0000 (0:00:00.673) 0:01:47.989 *****
2026-02-28 00:59:04.789679 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.789684 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.789688 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.789693 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.789697 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.789702 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.789706 | orchestrator |
2026-02-28 00:59:04.789711 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-28 00:59:04.789715 | orchestrator | Saturday 28 February 2026 00:48:56 +0000 (0:00:00.877) 0:01:48.867 *****
2026-02-28 00:59:04.789720 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-28 00:59:04.789725 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-28 00:59:04.789733 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-28 00:59:04.789738 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-28 00:59:04.789743 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-28 00:59:04.789755 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-28 00:59:04.789762 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-28 00:59:04.789769 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-28 00:59:04.789774 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-28 00:59:04.789782 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-28 00:59:04.789811 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-28 00:59:04.789818 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-28 00:59:04.789824 | orchestrator |
2026-02-28 00:59:04.789831 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-28 00:59:04.789837 | orchestrator | Saturday 28 February 2026 00:48:57 +0000 (0:00:01.357) 0:01:50.225 *****
2026-02-28 00:59:04.789844 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:59:04.789851 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:59:04.789858 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:59:04.789864 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:59:04.789872 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:59:04.789878 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:59:04.789885 | orchestrator |
2026-02-28 00:59:04.789892 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-28 00:59:04.789899 | orchestrator | Saturday 28 February 2026 00:48:59 +0000 (0:00:01.324) 0:01:51.550 *****
2026-02-28 00:59:04.789906 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.789913 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.789919 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.789927 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.789931 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.789935 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.789939 | orchestrator |
2026-02-28 00:59:04.789944 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-28 00:59:04.789948 | orchestrator | Saturday 28 February 2026 00:48:59 +0000 (0:00:00.656) 0:01:52.206 *****
2026-02-28 00:59:04.789952 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.789956 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.789961 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.789965 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.789969 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.789973 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.789977 | orchestrator |
2026-02-28 00:59:04.789981 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-28 00:59:04.789986 | orchestrator | Saturday 28 February 2026 00:49:00 +0000 (0:00:00.908) 0:01:53.114 *****
2026-02-28 00:59:04.789990 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.789994 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.789998 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.790002 |
orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.790006 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.790010 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.790038 | orchestrator | 2026-02-28 00:59:04.790043 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-28 00:59:04.790047 | orchestrator | Saturday 28 February 2026 00:49:01 +0000 (0:00:00.668) 0:01:53.782 ***** 2026-02-28 00:59:04.790052 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:04.790056 | orchestrator | 2026-02-28 00:59:04.790060 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-28 00:59:04.790070 | orchestrator | Saturday 28 February 2026 00:49:02 +0000 (0:00:01.327) 0:01:55.110 ***** 2026-02-28 00:59:04.790074 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.790078 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.790082 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.790086 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.790091 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.790095 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.790099 | orchestrator | 2026-02-28 00:59:04.790103 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-28 00:59:04.790107 | orchestrator | Saturday 28 February 2026 00:49:47 +0000 (0:00:44.435) 0:02:39.545 ***** 2026-02-28 00:59:04.790112 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-28 00:59:04.790116 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-28 00:59:04.790120 | orchestrator | skipping: [testbed-node-3] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-02-28 00:59:04.790124 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.790128 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-28 00:59:04.790133 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-28 00:59:04.790137 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-28 00:59:04.790141 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.790146 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-28 00:59:04.790150 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-28 00:59:04.790157 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-28 00:59:04.790163 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.790170 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-28 00:59:04.790181 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-28 00:59:04.790189 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-28 00:59:04.790195 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.790202 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-28 00:59:04.790209 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-28 00:59:04.790216 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-28 00:59:04.790222 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.790256 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-28 00:59:04.790264 | orchestrator | skipping: [testbed-node-2] => 
(item=docker.io/prom/prometheus:v2.7.2)  2026-02-28 00:59:04.790271 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-28 00:59:04.790278 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.790285 | orchestrator | 2026-02-28 00:59:04.790291 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-28 00:59:04.790295 | orchestrator | Saturday 28 February 2026 00:49:47 +0000 (0:00:00.801) 0:02:40.347 ***** 2026-02-28 00:59:04.790299 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.790304 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.790311 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.790318 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.790325 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.790331 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.790338 | orchestrator | 2026-02-28 00:59:04.790345 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-28 00:59:04.790352 | orchestrator | Saturday 28 February 2026 00:49:48 +0000 (0:00:00.895) 0:02:41.243 ***** 2026-02-28 00:59:04.790359 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.790373 | orchestrator | 2026-02-28 00:59:04.790379 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-28 00:59:04.790386 | orchestrator | Saturday 28 February 2026 00:49:48 +0000 (0:00:00.154) 0:02:41.397 ***** 2026-02-28 00:59:04.790392 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.790398 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.790405 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.790412 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.790420 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.790426 | orchestrator | skipping: 
[testbed-node-2] 2026-02-28 00:59:04.790433 | orchestrator | 2026-02-28 00:59:04.790440 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-28 00:59:04.790447 | orchestrator | Saturday 28 February 2026 00:49:49 +0000 (0:00:00.692) 0:02:42.090 ***** 2026-02-28 00:59:04.790454 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.790461 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.790468 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.790475 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.790482 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.790488 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.790494 | orchestrator | 2026-02-28 00:59:04.790501 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-28 00:59:04.790509 | orchestrator | Saturday 28 February 2026 00:49:50 +0000 (0:00:00.864) 0:02:42.954 ***** 2026-02-28 00:59:04.790516 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.790523 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.790530 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.790536 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.790543 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.790550 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.790557 | orchestrator | 2026-02-28 00:59:04.790564 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-28 00:59:04.790571 | orchestrator | Saturday 28 February 2026 00:49:51 +0000 (0:00:00.656) 0:02:43.611 ***** 2026-02-28 00:59:04.790578 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.790585 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.790592 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.790599 | orchestrator | ok: [testbed-node-0] 2026-02-28 
00:59:04.790605 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.790625 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.790633 | orchestrator | 2026-02-28 00:59:04.790640 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-28 00:59:04.790647 | orchestrator | Saturday 28 February 2026 00:49:53 +0000 (0:00:02.434) 0:02:46.045 ***** 2026-02-28 00:59:04.790654 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.790661 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.790669 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.790676 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.790683 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.790690 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.790697 | orchestrator | 2026-02-28 00:59:04.790704 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-28 00:59:04.790712 | orchestrator | Saturday 28 February 2026 00:49:54 +0000 (0:00:00.707) 0:02:46.752 ***** 2026-02-28 00:59:04.790720 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:04.790728 | orchestrator | 2026-02-28 00:59:04.790736 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-28 00:59:04.790743 | orchestrator | Saturday 28 February 2026 00:49:55 +0000 (0:00:01.475) 0:02:48.227 ***** 2026-02-28 00:59:04.790750 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.790758 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.790765 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.790777 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.790788 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.790796 | orchestrator | skipping: 
[testbed-node-2] 2026-02-28 00:59:04.790803 | orchestrator | 2026-02-28 00:59:04.790810 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-28 00:59:04.790817 | orchestrator | Saturday 28 February 2026 00:49:56 +0000 (0:00:00.824) 0:02:49.052 ***** 2026-02-28 00:59:04.790824 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.790832 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.790839 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.790846 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.790854 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.790861 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.790868 | orchestrator | 2026-02-28 00:59:04.790875 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-28 00:59:04.790883 | orchestrator | Saturday 28 February 2026 00:49:57 +0000 (0:00:00.581) 0:02:49.633 ***** 2026-02-28 00:59:04.790890 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.790897 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.790927 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.790935 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.790942 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.790949 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.790956 | orchestrator | 2026-02-28 00:59:04.790963 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-28 00:59:04.790970 | orchestrator | Saturday 28 February 2026 00:49:57 +0000 (0:00:00.830) 0:02:50.463 ***** 2026-02-28 00:59:04.790977 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.790984 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.790991 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.790998 | orchestrator | skipping: 
[testbed-node-0] 2026-02-28 00:59:04.791004 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.791012 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.791019 | orchestrator | 2026-02-28 00:59:04.791026 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-28 00:59:04.791033 | orchestrator | Saturday 28 February 2026 00:49:58 +0000 (0:00:00.603) 0:02:51.067 ***** 2026-02-28 00:59:04.791040 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.791047 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.791054 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.791061 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.791068 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.791075 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.791081 | orchestrator | 2026-02-28 00:59:04.791088 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-28 00:59:04.791096 | orchestrator | Saturday 28 February 2026 00:49:59 +0000 (0:00:00.971) 0:02:52.038 ***** 2026-02-28 00:59:04.791103 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.791110 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.791117 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.791124 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.791131 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.791138 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.791145 | orchestrator | 2026-02-28 00:59:04.791152 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-28 00:59:04.791158 | orchestrator | Saturday 28 February 2026 00:50:00 +0000 (0:00:00.773) 0:02:52.812 ***** 2026-02-28 00:59:04.791165 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.791172 | orchestrator | skipping: 
[testbed-node-4] 2026-02-28 00:59:04.791179 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.791185 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.791191 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.791197 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.791209 | orchestrator | 2026-02-28 00:59:04.791216 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-28 00:59:04.791224 | orchestrator | Saturday 28 February 2026 00:50:01 +0000 (0:00:01.001) 0:02:53.813 ***** 2026-02-28 00:59:04.791231 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.791238 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.791245 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.791252 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.791259 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.791266 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.791273 | orchestrator | 2026-02-28 00:59:04.791280 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-28 00:59:04.791287 | orchestrator | Saturday 28 February 2026 00:50:02 +0000 (0:00:00.769) 0:02:54.583 ***** 2026-02-28 00:59:04.791294 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.791300 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.791308 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.791315 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.791322 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.791329 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.791336 | orchestrator | 2026-02-28 00:59:04.791343 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-28 00:59:04.791350 | orchestrator | Saturday 28 February 2026 00:50:03 +0000 (0:00:01.447) 0:02:56.030 ***** 2026-02-28 
00:59:04.791357 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:04.791365 | orchestrator | 2026-02-28 00:59:04.791372 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-28 00:59:04.791379 | orchestrator | Saturday 28 February 2026 00:50:05 +0000 (0:00:01.499) 0:02:57.530 ***** 2026-02-28 00:59:04.791386 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-02-28 00:59:04.791393 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-02-28 00:59:04.791400 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-02-28 00:59:04.791407 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-28 00:59:04.791414 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-02-28 00:59:04.791421 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-28 00:59:04.791427 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-02-28 00:59:04.791437 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-28 00:59:04.791444 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-28 00:59:04.791453 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-02-28 00:59:04.791457 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-28 00:59:04.791461 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-28 00:59:04.791465 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-28 00:59:04.791469 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-28 00:59:04.791474 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-28 00:59:04.791478 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 
2026-02-28 00:59:04.791482 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-28 00:59:04.791486 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-28 00:59:04.791508 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-02-28 00:59:04.791513 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-28 00:59:04.791518 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-28 00:59:04.791522 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-28 00:59:04.791526 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-28 00:59:04.791535 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-28 00:59:04.791540 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-02-28 00:59:04.791544 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-28 00:59:04.791548 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-28 00:59:04.791552 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-28 00:59:04.791556 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-28 00:59:04.791560 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-28 00:59:04.791565 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-28 00:59:04.791569 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-02-28 00:59:04.791573 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-28 00:59:04.791577 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-28 00:59:04.791581 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-28 00:59:04.791586 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-28 00:59:04.791590 | 
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-28 00:59:04.791594 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-02-28 00:59:04.791598 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-28 00:59:04.791602 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-28 00:59:04.791607 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-28 00:59:04.791631 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-02-28 00:59:04.791636 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-28 00:59:04.791640 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-28 00:59:04.791644 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-28 00:59:04.791649 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-28 00:59:04.791653 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-28 00:59:04.791657 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-02-28 00:59:04.791661 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-28 00:59:04.791665 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-28 00:59:04.791670 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-28 00:59:04.791674 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-28 00:59:04.791678 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-28 00:59:04.791682 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-28 00:59:04.791686 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-28 00:59:04.791690 | orchestrator | changed: 
[testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-28 00:59:04.791694 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-28 00:59:04.791699 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-28 00:59:04.791703 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-28 00:59:04.791707 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-28 00:59:04.791711 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-28 00:59:04.791715 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-28 00:59:04.791719 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-28 00:59:04.791723 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-28 00:59:04.791727 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-28 00:59:04.791735 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-28 00:59:04.791739 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-28 00:59:04.791747 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-28 00:59:04.791752 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-28 00:59:04.791756 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-28 00:59:04.791760 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-28 00:59:04.791764 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-28 00:59:04.791768 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-28 00:59:04.791772 | orchestrator | changed: [testbed-node-5] => 
(item=/var/lib/ceph/bootstrap-rbd) 2026-02-28 00:59:04.791776 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-28 00:59:04.791781 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-28 00:59:04.791798 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-28 00:59:04.791804 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-02-28 00:59:04.791808 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-28 00:59:04.791812 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-28 00:59:04.791816 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-28 00:59:04.791820 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-28 00:59:04.791825 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-28 00:59:04.791829 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-02-28 00:59:04.791833 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-02-28 00:59:04.791837 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-28 00:59:04.791842 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-02-28 00:59:04.791846 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-02-28 00:59:04.791850 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-28 00:59:04.791854 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-02-28 00:59:04.791858 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-02-28 00:59:04.791863 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-02-28 00:59:04.791867 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 
2026-02-28 00:59:04.791871 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-02-28 00:59:04.791875 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-02-28 00:59:04.791879 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-02-28 00:59:04.791883 | orchestrator | 2026-02-28 00:59:04.791888 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-28 00:59:04.791892 | orchestrator | Saturday 28 February 2026 00:50:11 +0000 (0:00:06.779) 0:03:04.310 ***** 2026-02-28 00:59:04.791896 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.791900 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.791904 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.791909 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:59:04.791914 | orchestrator | 2026-02-28 00:59:04.791918 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-28 00:59:04.791922 | orchestrator | Saturday 28 February 2026 00:50:12 +0000 (0:00:01.038) 0:03:05.349 ***** 2026-02-28 00:59:04.791927 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-28 00:59:04.791936 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-28 00:59:04.791940 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-28 00:59:04.791944 | orchestrator | 2026-02-28 00:59:04.791949 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-28 00:59:04.791953 | orchestrator | Saturday 28 February 2026 
00:50:13 +0000 (0:00:01.114) 0:03:06.464 ***** 2026-02-28 00:59:04.791957 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-28 00:59:04.791961 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-28 00:59:04.791966 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-28 00:59:04.791970 | orchestrator | 2026-02-28 00:59:04.791974 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-28 00:59:04.791978 | orchestrator | Saturday 28 February 2026 00:50:15 +0000 (0:00:01.594) 0:03:08.058 ***** 2026-02-28 00:59:04.791983 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.791987 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.791991 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.791995 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.791999 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.792004 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.792008 | orchestrator | 2026-02-28 00:59:04.792012 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-28 00:59:04.792019 | orchestrator | Saturday 28 February 2026 00:50:16 +0000 (0:00:00.852) 0:03:08.911 ***** 2026-02-28 00:59:04.792023 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.792028 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.792032 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.792036 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.792040 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.792044 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.792049 | orchestrator | 
2026-02-28 00:59:04.792053 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-28 00:59:04.792057 | orchestrator | Saturday 28 February 2026 00:50:17 +0000 (0:00:01.118) 0:03:10.029 ***** 2026-02-28 00:59:04.792061 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.792065 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.792069 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.792074 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.792078 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.792082 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.792086 | orchestrator | 2026-02-28 00:59:04.792104 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-28 00:59:04.792109 | orchestrator | Saturday 28 February 2026 00:50:18 +0000 (0:00:00.761) 0:03:10.790 ***** 2026-02-28 00:59:04.792113 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.792117 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.792121 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.792125 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.792130 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.792134 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.792138 | orchestrator | 2026-02-28 00:59:04.792142 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-28 00:59:04.792146 | orchestrator | Saturday 28 February 2026 00:50:19 +0000 (0:00:00.959) 0:03:11.750 ***** 2026-02-28 00:59:04.792151 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.792161 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.792165 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.792169 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.792173 | orchestrator | 
skipping: [testbed-node-1] 2026-02-28 00:59:04.792178 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.792182 | orchestrator | 2026-02-28 00:59:04.792186 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-28 00:59:04.792190 | orchestrator | Saturday 28 February 2026 00:50:19 +0000 (0:00:00.701) 0:03:12.451 ***** 2026-02-28 00:59:04.792194 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.792199 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.792203 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.792207 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.792211 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.792215 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.792219 | orchestrator | 2026-02-28 00:59:04.792223 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-28 00:59:04.792228 | orchestrator | Saturday 28 February 2026 00:50:20 +0000 (0:00:01.033) 0:03:13.485 ***** 2026-02-28 00:59:04.792232 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.792236 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.792240 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.792245 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.792249 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.792253 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.792257 | orchestrator | 2026-02-28 00:59:04.792261 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-28 00:59:04.792265 | orchestrator | Saturday 28 February 2026 00:50:21 +0000 (0:00:00.620) 0:03:14.105 ***** 2026-02-28 00:59:04.792270 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.792274 | orchestrator | 
skipping: [testbed-node-4] 2026-02-28 00:59:04.792278 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.792282 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.792286 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.792290 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.792294 | orchestrator | 2026-02-28 00:59:04.792299 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-28 00:59:04.792303 | orchestrator | Saturday 28 February 2026 00:50:22 +0000 (0:00:00.821) 0:03:14.926 ***** 2026-02-28 00:59:04.792307 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.792311 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.792316 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.792320 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.792324 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.792328 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.792332 | orchestrator | 2026-02-28 00:59:04.792336 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-28 00:59:04.792341 | orchestrator | Saturday 28 February 2026 00:50:25 +0000 (0:00:03.516) 0:03:18.443 ***** 2026-02-28 00:59:04.792345 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.792349 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.792353 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.792357 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.792362 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.792366 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.792370 | orchestrator | 2026-02-28 00:59:04.792374 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-28 00:59:04.792378 | orchestrator | Saturday 28 February 2026 00:50:27 +0000 (0:00:01.158) 0:03:19.601 
***** 2026-02-28 00:59:04.792383 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.792387 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.792391 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.792395 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.792404 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.792408 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.792412 | orchestrator | 2026-02-28 00:59:04.792416 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-28 00:59:04.792420 | orchestrator | Saturday 28 February 2026 00:50:28 +0000 (0:00:01.215) 0:03:20.817 ***** 2026-02-28 00:59:04.792425 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.792429 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.792433 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.792440 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.792445 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.792449 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.792453 | orchestrator | 2026-02-28 00:59:04.792458 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-28 00:59:04.792462 | orchestrator | Saturday 28 February 2026 00:50:29 +0000 (0:00:01.162) 0:03:21.979 ***** 2026-02-28 00:59:04.792466 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-28 00:59:04.792470 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-28 00:59:04.792475 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-28 00:59:04.792479 | orchestrator | skipping: [testbed-node-1] 2026-02-28 
00:59:04.792496 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.792501 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.792505 | orchestrator | 2026-02-28 00:59:04.792510 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-28 00:59:04.792514 | orchestrator | Saturday 28 February 2026 00:50:30 +0000 (0:00:00.839) 0:03:22.819 ***** 2026-02-28 00:59:04.792520 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-02-28 00:59:04.792527 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-02-28 00:59:04.792532 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.792536 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-02-28 00:59:04.792540 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-02-28 00:59:04.792545 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.792549 | orchestrator | 
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-02-28 00:59:04.792554 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-02-28 00:59:04.792561 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.792565 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.792569 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.792573 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.792578 | orchestrator | 2026-02-28 00:59:04.792582 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-28 00:59:04.792586 | orchestrator | Saturday 28 February 2026 00:50:31 +0000 (0:00:01.252) 0:03:24.071 ***** 2026-02-28 00:59:04.792590 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.792594 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.792598 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.792602 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.792607 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.792643 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.792648 | orchestrator | 2026-02-28 00:59:04.792653 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-28 00:59:04.792657 | orchestrator | Saturday 28 February 2026 00:50:32 +0000 (0:00:00.616) 0:03:24.687 ***** 2026-02-28 00:59:04.792661 | orchestrator | skipping: 
[testbed-node-3] 2026-02-28 00:59:04.792665 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.792669 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.792673 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.792678 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.792682 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.792686 | orchestrator | 2026-02-28 00:59:04.792690 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-28 00:59:04.792694 | orchestrator | Saturday 28 February 2026 00:50:32 +0000 (0:00:00.805) 0:03:25.493 ***** 2026-02-28 00:59:04.792702 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.792706 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.792710 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.792714 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.792718 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.792723 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.792727 | orchestrator | 2026-02-28 00:59:04.792731 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-28 00:59:04.792735 | orchestrator | Saturday 28 February 2026 00:50:33 +0000 (0:00:00.781) 0:03:26.274 ***** 2026-02-28 00:59:04.792740 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.792744 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.792748 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.792752 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.792756 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.792761 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.792765 | orchestrator | 2026-02-28 00:59:04.792769 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv6] **** 2026-02-28 00:59:04.792787 | orchestrator | Saturday 28 February 2026 00:50:34 +0000 (0:00:00.879) 0:03:27.154 ***** 2026-02-28 00:59:04.792792 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.792796 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.792800 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.792805 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.792809 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.792813 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.792817 | orchestrator | 2026-02-28 00:59:04.792821 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-28 00:59:04.792825 | orchestrator | Saturday 28 February 2026 00:50:35 +0000 (0:00:00.948) 0:03:28.102 ***** 2026-02-28 00:59:04.792830 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.792834 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.792842 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.792846 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.792850 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.792854 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.792859 | orchestrator | 2026-02-28 00:59:04.792863 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-28 00:59:04.792867 | orchestrator | Saturday 28 February 2026 00:50:37 +0000 (0:00:01.518) 0:03:29.620 ***** 2026-02-28 00:59:04.792871 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 00:59:04.792875 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 00:59:04.792880 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 00:59:04.792884 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.792888 | orchestrator | 2026-02-28 00:59:04.792892 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-28 00:59:04.792896 | orchestrator | Saturday 28 February 2026 00:50:37 +0000 (0:00:00.540) 0:03:30.161 ***** 2026-02-28 00:59:04.792901 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 00:59:04.792905 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 00:59:04.792909 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 00:59:04.792913 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.792917 | orchestrator | 2026-02-28 00:59:04.792921 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-28 00:59:04.792926 | orchestrator | Saturday 28 February 2026 00:50:38 +0000 (0:00:00.672) 0:03:30.833 ***** 2026-02-28 00:59:04.792930 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 00:59:04.792934 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 00:59:04.792938 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 00:59:04.792942 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.792947 | orchestrator | 2026-02-28 00:59:04.792951 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-28 00:59:04.792955 | orchestrator | Saturday 28 February 2026 00:50:39 +0000 (0:00:00.685) 0:03:31.519 ***** 2026-02-28 00:59:04.792959 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.792963 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.792968 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.792972 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.792976 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.792980 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.792984 | orchestrator | 2026-02-28 00:59:04.792988 | 
orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-28 00:59:04.792993 | orchestrator | Saturday 28 February 2026 00:50:39 +0000 (0:00:00.960) 0:03:32.479 ***** 2026-02-28 00:59:04.792997 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-28 00:59:04.793001 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-28 00:59:04.793005 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-28 00:59:04.793010 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-28 00:59:04.793014 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.793018 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-02-28 00:59:04.793022 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.793026 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-02-28 00:59:04.793030 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.793035 | orchestrator | 2026-02-28 00:59:04.793039 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-28 00:59:04.793043 | orchestrator | Saturday 28 February 2026 00:50:42 +0000 (0:00:02.678) 0:03:35.158 ***** 2026-02-28 00:59:04.793047 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:59:04.793051 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:59:04.793055 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:04.793060 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:59:04.793067 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:04.793071 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:04.793075 | orchestrator | 2026-02-28 00:59:04.793079 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-28 00:59:04.793084 | orchestrator | Saturday 28 February 2026 00:50:47 +0000 (0:00:04.581) 0:03:39.739 ***** 2026-02-28 00:59:04.793088 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:59:04.793094 | 
orchestrator | changed: [testbed-node-5] 2026-02-28 00:59:04.793099 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:59:04.793103 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:04.793107 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:04.793111 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:04.793115 | orchestrator | 2026-02-28 00:59:04.793119 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-28 00:59:04.793124 | orchestrator | Saturday 28 February 2026 00:50:49 +0000 (0:00:01.964) 0:03:41.704 ***** 2026-02-28 00:59:04.793128 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.793132 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.793136 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.793140 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:04.793145 | orchestrator | 2026-02-28 00:59:04.793149 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-02-28 00:59:04.793167 | orchestrator | Saturday 28 February 2026 00:50:50 +0000 (0:00:00.999) 0:03:42.704 ***** 2026-02-28 00:59:04.793172 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.793176 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.793180 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.793184 | orchestrator | 2026-02-28 00:59:04.793188 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-02-28 00:59:04.793192 | orchestrator | Saturday 28 February 2026 00:50:50 +0000 (0:00:00.629) 0:03:43.334 ***** 2026-02-28 00:59:04.793195 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:04.793199 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:04.793203 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:04.793207 | orchestrator | 
2026-02-28 00:59:04.793211 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-02-28 00:59:04.793214 | orchestrator | Saturday 28 February 2026 00:50:52 +0000 (0:00:01.597) 0:03:44.932 ***** 2026-02-28 00:59:04.793218 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-28 00:59:04.793222 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-28 00:59:04.793226 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-28 00:59:04.793230 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.793234 | orchestrator | 2026-02-28 00:59:04.793238 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-02-28 00:59:04.793241 | orchestrator | Saturday 28 February 2026 00:50:53 +0000 (0:00:00.692) 0:03:45.625 ***** 2026-02-28 00:59:04.793245 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.793249 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.793253 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.793257 | orchestrator | 2026-02-28 00:59:04.793261 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-28 00:59:04.793265 | orchestrator | Saturday 28 February 2026 00:50:53 +0000 (0:00:00.608) 0:03:46.233 ***** 2026-02-28 00:59:04.793268 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.793272 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.793276 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.793280 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:59:04.793284 | orchestrator | 2026-02-28 00:59:04.793288 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-02-28 00:59:04.793292 | orchestrator | Saturday 28 February 2026 00:50:55 +0000 
(0:00:01.384) 0:03:47.618 ***** 2026-02-28 00:59:04.793299 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 00:59:04.793303 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 00:59:04.793307 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 00:59:04.793311 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.793314 | orchestrator | 2026-02-28 00:59:04.793318 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-02-28 00:59:04.793322 | orchestrator | Saturday 28 February 2026 00:50:55 +0000 (0:00:00.558) 0:03:48.176 ***** 2026-02-28 00:59:04.793326 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.793330 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.793334 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.793337 | orchestrator | 2026-02-28 00:59:04.793341 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-02-28 00:59:04.793345 | orchestrator | Saturday 28 February 2026 00:50:56 +0000 (0:00:00.374) 0:03:48.552 ***** 2026-02-28 00:59:04.793349 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.793353 | orchestrator | 2026-02-28 00:59:04.793356 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-02-28 00:59:04.793360 | orchestrator | Saturday 28 February 2026 00:50:56 +0000 (0:00:00.243) 0:03:48.795 ***** 2026-02-28 00:59:04.793364 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.793371 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.793377 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.793383 | orchestrator | 2026-02-28 00:59:04.793390 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-02-28 00:59:04.793395 | orchestrator | Saturday 28 February 2026 00:50:56 
+0000 (0:00:00.394) 0:03:49.189 ***** 2026-02-28 00:59:04.793401 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.793407 | orchestrator | 2026-02-28 00:59:04.793412 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-02-28 00:59:04.793418 | orchestrator | Saturday 28 February 2026 00:50:56 +0000 (0:00:00.246) 0:03:49.435 ***** 2026-02-28 00:59:04.793423 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.793429 | orchestrator | 2026-02-28 00:59:04.793434 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-02-28 00:59:04.793440 | orchestrator | Saturday 28 February 2026 00:50:57 +0000 (0:00:00.245) 0:03:49.681 ***** 2026-02-28 00:59:04.793446 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.793452 | orchestrator | 2026-02-28 00:59:04.793458 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-02-28 00:59:04.793463 | orchestrator | Saturday 28 February 2026 00:50:57 +0000 (0:00:00.126) 0:03:49.807 ***** 2026-02-28 00:59:04.793469 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.793475 | orchestrator | 2026-02-28 00:59:04.793484 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-02-28 00:59:04.793490 | orchestrator | Saturday 28 February 2026 00:50:58 +0000 (0:00:00.876) 0:03:50.684 ***** 2026-02-28 00:59:04.793496 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.793502 | orchestrator | 2026-02-28 00:59:04.793507 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-02-28 00:59:04.793513 | orchestrator | Saturday 28 February 2026 00:50:58 +0000 (0:00:00.275) 0:03:50.959 ***** 2026-02-28 00:59:04.793519 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 00:59:04.793525 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-3)  2026-02-28 00:59:04.793531 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 00:59:04.793537 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.793543 | orchestrator | 2026-02-28 00:59:04.793549 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-02-28 00:59:04.793579 | orchestrator | Saturday 28 February 2026 00:50:58 +0000 (0:00:00.496) 0:03:51.456 ***** 2026-02-28 00:59:04.793585 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.793594 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.793598 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.793601 | orchestrator | 2026-02-28 00:59:04.793605 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-02-28 00:59:04.793609 | orchestrator | Saturday 28 February 2026 00:50:59 +0000 (0:00:00.399) 0:03:51.856 ***** 2026-02-28 00:59:04.793627 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.793631 | orchestrator | 2026-02-28 00:59:04.793634 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-02-28 00:59:04.793638 | orchestrator | Saturday 28 February 2026 00:50:59 +0000 (0:00:00.251) 0:03:52.108 ***** 2026-02-28 00:59:04.793642 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.793646 | orchestrator | 2026-02-28 00:59:04.793650 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-28 00:59:04.793653 | orchestrator | Saturday 28 February 2026 00:50:59 +0000 (0:00:00.301) 0:03:52.409 ***** 2026-02-28 00:59:04.793657 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.793673 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.793677 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.793681 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:59:04.793685 | orchestrator | 2026-02-28 00:59:04.793688 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-02-28 00:59:04.793692 | orchestrator | Saturday 28 February 2026 00:51:01 +0000 (0:00:01.411) 0:03:53.821 ***** 2026-02-28 00:59:04.793696 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.793700 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.793704 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.793708 | orchestrator | 2026-02-28 00:59:04.793712 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-02-28 00:59:04.793715 | orchestrator | Saturday 28 February 2026 00:51:01 +0000 (0:00:00.429) 0:03:54.250 ***** 2026-02-28 00:59:04.793719 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:59:04.793723 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:59:04.793727 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:59:04.793731 | orchestrator | 2026-02-28 00:59:04.793735 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-28 00:59:04.793739 | orchestrator | Saturday 28 February 2026 00:51:03 +0000 (0:00:01.455) 0:03:55.706 ***** 2026-02-28 00:59:04.793745 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 00:59:04.793751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 00:59:04.793757 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 00:59:04.793762 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.793768 | orchestrator | 2026-02-28 00:59:04.793773 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-02-28 00:59:04.793779 | orchestrator | Saturday 28 February 2026 00:51:04 +0000 (0:00:01.037) 
0:03:56.744 ***** 2026-02-28 00:59:04.793784 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.793790 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.793795 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.793801 | orchestrator | 2026-02-28 00:59:04.793807 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-28 00:59:04.793814 | orchestrator | Saturday 28 February 2026 00:51:04 +0000 (0:00:00.704) 0:03:57.449 ***** 2026-02-28 00:59:04.793820 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.793826 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.793832 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.793838 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:59:04.793845 | orchestrator | 2026-02-28 00:59:04.793850 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-02-28 00:59:04.793854 | orchestrator | Saturday 28 February 2026 00:51:06 +0000 (0:00:01.217) 0:03:58.666 ***** 2026-02-28 00:59:04.793862 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.793866 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.793870 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.793873 | orchestrator | 2026-02-28 00:59:04.793877 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-02-28 00:59:04.793881 | orchestrator | Saturday 28 February 2026 00:51:06 +0000 (0:00:00.750) 0:03:59.417 ***** 2026-02-28 00:59:04.793885 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:59:04.793889 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:59:04.793892 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:59:04.793896 | orchestrator | 2026-02-28 00:59:04.793900 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] 
******************** 2026-02-28 00:59:04.793904 | orchestrator | Saturday 28 February 2026 00:51:08 +0000 (0:00:01.593) 0:04:01.011 ***** 2026-02-28 00:59:04.793907 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 00:59:04.793920 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 00:59:04.793928 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 00:59:04.793932 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.793935 | orchestrator | 2026-02-28 00:59:04.793939 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-02-28 00:59:04.793943 | orchestrator | Saturday 28 February 2026 00:51:09 +0000 (0:00:00.706) 0:04:01.717 ***** 2026-02-28 00:59:04.793947 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.793951 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.793955 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.793958 | orchestrator | 2026-02-28 00:59:04.793962 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-28 00:59:04.793966 | orchestrator | Saturday 28 February 2026 00:51:09 +0000 (0:00:00.464) 0:04:02.181 ***** 2026-02-28 00:59:04.793970 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.793974 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.793978 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.793981 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.793985 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.794008 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.794032 | orchestrator | 2026-02-28 00:59:04.794037 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-28 00:59:04.794041 | orchestrator | Saturday 28 February 2026 00:51:10 +0000 (0:00:01.036) 0:04:03.217 ***** 2026-02-28 
00:59:04.794045 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.794049 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.794052 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.794056 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:04.794060 | orchestrator | 2026-02-28 00:59:04.794064 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-02-28 00:59:04.794068 | orchestrator | Saturday 28 February 2026 00:51:11 +0000 (0:00:01.061) 0:04:04.279 ***** 2026-02-28 00:59:04.794072 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.794076 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.794080 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.794083 | orchestrator | 2026-02-28 00:59:04.794087 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-02-28 00:59:04.794091 | orchestrator | Saturday 28 February 2026 00:51:12 +0000 (0:00:00.731) 0:04:05.011 ***** 2026-02-28 00:59:04.794095 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:04.794099 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:04.794102 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:04.794106 | orchestrator | 2026-02-28 00:59:04.794110 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-02-28 00:59:04.794114 | orchestrator | Saturday 28 February 2026 00:51:13 +0000 (0:00:01.448) 0:04:06.460 ***** 2026-02-28 00:59:04.794121 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-28 00:59:04.794125 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-28 00:59:04.794129 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-28 00:59:04.794132 | orchestrator | skipping: [testbed-node-0] 2026-02-28 
00:59:04.794136 | orchestrator | 2026-02-28 00:59:04.794140 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-02-28 00:59:04.794144 | orchestrator | Saturday 28 February 2026 00:51:14 +0000 (0:00:00.739) 0:04:07.200 ***** 2026-02-28 00:59:04.794147 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.794151 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.794155 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.794159 | orchestrator | 2026-02-28 00:59:04.794163 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-02-28 00:59:04.794166 | orchestrator | 2026-02-28 00:59:04.794170 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-28 00:59:04.794174 | orchestrator | Saturday 28 February 2026 00:51:15 +0000 (0:00:00.810) 0:04:08.010 ***** 2026-02-28 00:59:04.794178 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:04.794182 | orchestrator | 2026-02-28 00:59:04.794186 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-28 00:59:04.794189 | orchestrator | Saturday 28 February 2026 00:51:16 +0000 (0:00:00.955) 0:04:08.965 ***** 2026-02-28 00:59:04.794193 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:04.794197 | orchestrator | 2026-02-28 00:59:04.794201 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-28 00:59:04.794205 | orchestrator | Saturday 28 February 2026 00:51:17 +0000 (0:00:00.640) 0:04:09.606 ***** 2026-02-28 00:59:04.794208 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.794212 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.794216 | 
orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.794220 | orchestrator | 2026-02-28 00:59:04.794223 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-28 00:59:04.794227 | orchestrator | Saturday 28 February 2026 00:51:18 +0000 (0:00:01.066) 0:04:10.672 ***** 2026-02-28 00:59:04.794231 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.794235 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.794238 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.794242 | orchestrator | 2026-02-28 00:59:04.794246 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-28 00:59:04.794250 | orchestrator | Saturday 28 February 2026 00:51:18 +0000 (0:00:00.392) 0:04:11.065 ***** 2026-02-28 00:59:04.794254 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.794257 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.794261 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.794265 | orchestrator | 2026-02-28 00:59:04.794269 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-28 00:59:04.794272 | orchestrator | Saturday 28 February 2026 00:51:18 +0000 (0:00:00.361) 0:04:11.427 ***** 2026-02-28 00:59:04.794276 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.794280 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.794284 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.794287 | orchestrator | 2026-02-28 00:59:04.794294 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-28 00:59:04.794298 | orchestrator | Saturday 28 February 2026 00:51:19 +0000 (0:00:00.352) 0:04:11.780 ***** 2026-02-28 00:59:04.794302 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.794305 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.794309 | orchestrator | ok: 
[testbed-node-2] 2026-02-28 00:59:04.794313 | orchestrator | 2026-02-28 00:59:04.794317 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-28 00:59:04.794325 | orchestrator | Saturday 28 February 2026 00:51:20 +0000 (0:00:01.077) 0:04:12.857 ***** 2026-02-28 00:59:04.794330 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.794336 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.794342 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.794347 | orchestrator | 2026-02-28 00:59:04.794352 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-28 00:59:04.794358 | orchestrator | Saturday 28 February 2026 00:51:20 +0000 (0:00:00.358) 0:04:13.216 ***** 2026-02-28 00:59:04.794382 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.794389 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.794395 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.794402 | orchestrator | 2026-02-28 00:59:04.794408 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-28 00:59:04.794415 | orchestrator | Saturday 28 February 2026 00:51:21 +0000 (0:00:00.344) 0:04:13.560 ***** 2026-02-28 00:59:04.794419 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.794423 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.794427 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.794431 | orchestrator | 2026-02-28 00:59:04.794434 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-28 00:59:04.794438 | orchestrator | Saturday 28 February 2026 00:51:21 +0000 (0:00:00.861) 0:04:14.422 ***** 2026-02-28 00:59:04.794442 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.794446 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.794450 | orchestrator | ok: [testbed-node-2] 2026-02-28 
00:59:04.794453 | orchestrator | 2026-02-28 00:59:04.794457 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-28 00:59:04.794461 | orchestrator | Saturday 28 February 2026 00:51:23 +0000 (0:00:01.116) 0:04:15.539 ***** 2026-02-28 00:59:04.794465 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.794469 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.794473 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.794476 | orchestrator | 2026-02-28 00:59:04.794480 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-28 00:59:04.794484 | orchestrator | Saturday 28 February 2026 00:51:23 +0000 (0:00:00.316) 0:04:15.855 ***** 2026-02-28 00:59:04.794488 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.794492 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.794496 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.794499 | orchestrator | 2026-02-28 00:59:04.794503 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-28 00:59:04.794507 | orchestrator | Saturday 28 February 2026 00:51:23 +0000 (0:00:00.510) 0:04:16.366 ***** 2026-02-28 00:59:04.794511 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.794515 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.794519 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.794522 | orchestrator | 2026-02-28 00:59:04.794526 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-28 00:59:04.794530 | orchestrator | Saturday 28 February 2026 00:51:24 +0000 (0:00:00.359) 0:04:16.725 ***** 2026-02-28 00:59:04.794534 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.794538 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.794541 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.794545 | 
orchestrator | 2026-02-28 00:59:04.794549 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-28 00:59:04.794553 | orchestrator | Saturday 28 February 2026 00:51:24 +0000 (0:00:00.346) 0:04:17.072 ***** 2026-02-28 00:59:04.794557 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.794561 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.794564 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.794568 | orchestrator | 2026-02-28 00:59:04.794572 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-28 00:59:04.794576 | orchestrator | Saturday 28 February 2026 00:51:25 +0000 (0:00:00.704) 0:04:17.777 ***** 2026-02-28 00:59:04.794584 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.794588 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.794592 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.794596 | orchestrator | 2026-02-28 00:59:04.794599 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-28 00:59:04.794603 | orchestrator | Saturday 28 February 2026 00:51:25 +0000 (0:00:00.376) 0:04:18.153 ***** 2026-02-28 00:59:04.794607 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.794642 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.794646 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.794650 | orchestrator | 2026-02-28 00:59:04.794654 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-28 00:59:04.794658 | orchestrator | Saturday 28 February 2026 00:51:26 +0000 (0:00:00.354) 0:04:18.508 ***** 2026-02-28 00:59:04.794662 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.794665 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.794669 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.794673 | orchestrator | 
2026-02-28 00:59:04.794677 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-28 00:59:04.794681 | orchestrator | Saturday 28 February 2026 00:51:26 +0000 (0:00:00.425) 0:04:18.933 ***** 2026-02-28 00:59:04.794684 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.794688 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.794692 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.794696 | orchestrator | 2026-02-28 00:59:04.794700 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-28 00:59:04.794703 | orchestrator | Saturday 28 February 2026 00:51:27 +0000 (0:00:00.697) 0:04:19.630 ***** 2026-02-28 00:59:04.794707 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.794711 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.794715 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.794718 | orchestrator | 2026-02-28 00:59:04.794726 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-28 00:59:04.794730 | orchestrator | Saturday 28 February 2026 00:51:27 +0000 (0:00:00.646) 0:04:20.276 ***** 2026-02-28 00:59:04.794734 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.794738 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.794741 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.794745 | orchestrator | 2026-02-28 00:59:04.794749 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-28 00:59:04.794753 | orchestrator | Saturday 28 February 2026 00:51:28 +0000 (0:00:00.408) 0:04:20.685 ***** 2026-02-28 00:59:04.794756 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:04.794760 | orchestrator | 2026-02-28 00:59:04.794764 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] 
************** 2026-02-28 00:59:04.794768 | orchestrator | Saturday 28 February 2026 00:51:29 +0000 (0:00:01.021) 0:04:21.707 ***** 2026-02-28 00:59:04.794772 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.794776 | orchestrator | 2026-02-28 00:59:04.794794 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-28 00:59:04.794798 | orchestrator | Saturday 28 February 2026 00:51:29 +0000 (0:00:00.195) 0:04:21.902 ***** 2026-02-28 00:59:04.794802 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-28 00:59:04.794806 | orchestrator | 2026-02-28 00:59:04.794810 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-28 00:59:04.794814 | orchestrator | Saturday 28 February 2026 00:51:31 +0000 (0:00:01.602) 0:04:23.505 ***** 2026-02-28 00:59:04.794818 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.794822 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.794825 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.794829 | orchestrator | 2026-02-28 00:59:04.794833 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-28 00:59:04.794837 | orchestrator | Saturday 28 February 2026 00:51:31 +0000 (0:00:00.400) 0:04:23.906 ***** 2026-02-28 00:59:04.794844 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.794848 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.794852 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.794856 | orchestrator | 2026-02-28 00:59:04.794859 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-28 00:59:04.794863 | orchestrator | Saturday 28 February 2026 00:51:31 +0000 (0:00:00.363) 0:04:24.269 ***** 2026-02-28 00:59:04.794867 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:04.794871 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:04.794874 | 
orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:04.794878 | orchestrator | 2026-02-28 00:59:04.794882 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-28 00:59:04.794886 | orchestrator | Saturday 28 February 2026 00:51:33 +0000 (0:00:01.630) 0:04:25.899 ***** 2026-02-28 00:59:04.794889 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:04.794893 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:04.794897 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:04.794901 | orchestrator | 2026-02-28 00:59:04.794905 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-28 00:59:04.794908 | orchestrator | Saturday 28 February 2026 00:51:34 +0000 (0:00:01.251) 0:04:27.151 ***** 2026-02-28 00:59:04.794912 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:04.794916 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:04.794920 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:04.794923 | orchestrator | 2026-02-28 00:59:04.794927 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-28 00:59:04.794931 | orchestrator | Saturday 28 February 2026 00:51:35 +0000 (0:00:00.919) 0:04:28.071 ***** 2026-02-28 00:59:04.794935 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.794939 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.794942 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.794946 | orchestrator | 2026-02-28 00:59:04.794950 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-28 00:59:04.794954 | orchestrator | Saturday 28 February 2026 00:51:36 +0000 (0:00:00.880) 0:04:28.952 ***** 2026-02-28 00:59:04.794957 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:04.794961 | orchestrator | 2026-02-28 00:59:04.794965 | orchestrator | TASK [ceph-mon : Slurp admin keyring] 
****************************************** 2026-02-28 00:59:04.794969 | orchestrator | Saturday 28 February 2026 00:51:38 +0000 (0:00:02.273) 0:04:31.225 ***** 2026-02-28 00:59:04.794976 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.794982 | orchestrator | 2026-02-28 00:59:04.794987 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-28 00:59:04.794992 | orchestrator | Saturday 28 February 2026 00:51:39 +0000 (0:00:00.854) 0:04:32.079 ***** 2026-02-28 00:59:04.794998 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-28 00:59:04.795003 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 00:59:04.795009 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 00:59:04.795014 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-28 00:59:04.795020 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-28 00:59:04.795025 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-28 00:59:04.795031 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-28 00:59:04.795037 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-02-28 00:59:04.795043 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-28 00:59:04.795048 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-02-28 00:59:04.795054 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-28 00:59:04.795060 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-02-28 00:59:04.795066 | orchestrator | 2026-02-28 00:59:04.795072 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-28 00:59:04.795083 | orchestrator | Saturday 28 February 2026 00:51:43 +0000 (0:00:04.099) 0:04:36.179 ***** 2026-02-28 00:59:04.795089 
| orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:04.795095 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:04.795101 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:04.795107 | orchestrator | 2026-02-28 00:59:04.795117 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-28 00:59:04.795123 | orchestrator | Saturday 28 February 2026 00:51:45 +0000 (0:00:01.500) 0:04:37.679 ***** 2026-02-28 00:59:04.795129 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.795135 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.795142 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.795149 | orchestrator | 2026-02-28 00:59:04.795153 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-28 00:59:04.795157 | orchestrator | Saturday 28 February 2026 00:51:45 +0000 (0:00:00.429) 0:04:38.109 ***** 2026-02-28 00:59:04.795160 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.795164 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.795168 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.795172 | orchestrator | 2026-02-28 00:59:04.795176 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-28 00:59:04.795180 | orchestrator | Saturday 28 February 2026 00:51:46 +0000 (0:00:00.790) 0:04:38.899 ***** 2026-02-28 00:59:04.795183 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:04.795206 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:04.795211 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:04.795215 | orchestrator | 2026-02-28 00:59:04.795219 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-28 00:59:04.795222 | orchestrator | Saturday 28 February 2026 00:51:48 +0000 (0:00:01.926) 0:04:40.826 ***** 2026-02-28 00:59:04.795226 | orchestrator | changed: [testbed-node-0] 
2026-02-28 00:59:04.795230 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:04.795234 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:04.795238 | orchestrator | 2026-02-28 00:59:04.795241 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-28 00:59:04.795245 | orchestrator | Saturday 28 February 2026 00:51:49 +0000 (0:00:01.339) 0:04:42.165 ***** 2026-02-28 00:59:04.795249 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.795253 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.795257 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.795260 | orchestrator | 2026-02-28 00:59:04.795264 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-28 00:59:04.795268 | orchestrator | Saturday 28 February 2026 00:51:50 +0000 (0:00:00.336) 0:04:42.501 ***** 2026-02-28 00:59:04.795272 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:04.795276 | orchestrator | 2026-02-28 00:59:04.795280 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-28 00:59:04.795284 | orchestrator | Saturday 28 February 2026 00:51:50 +0000 (0:00:00.951) 0:04:43.452 ***** 2026-02-28 00:59:04.795287 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.795291 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.795295 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.795300 | orchestrator | 2026-02-28 00:59:04.795307 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-28 00:59:04.795313 | orchestrator | Saturday 28 February 2026 00:51:51 +0000 (0:00:00.480) 0:04:43.933 ***** 2026-02-28 00:59:04.795320 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.795326 | orchestrator | skipping: 
[testbed-node-1] 2026-02-28 00:59:04.795333 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.795340 | orchestrator | 2026-02-28 00:59:04.795346 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-28 00:59:04.795350 | orchestrator | Saturday 28 February 2026 00:51:52 +0000 (0:00:00.806) 0:04:44.739 ***** 2026-02-28 00:59:04.795359 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:04.795363 | orchestrator | 2026-02-28 00:59:04.795367 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-28 00:59:04.795371 | orchestrator | Saturday 28 February 2026 00:51:53 +0000 (0:00:00.841) 0:04:45.581 ***** 2026-02-28 00:59:04.795375 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:04.795378 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:04.795382 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:04.795386 | orchestrator | 2026-02-28 00:59:04.795390 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-28 00:59:04.795394 | orchestrator | Saturday 28 February 2026 00:51:56 +0000 (0:00:03.337) 0:04:48.918 ***** 2026-02-28 00:59:04.795397 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:04.795401 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:04.795405 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:04.795409 | orchestrator | 2026-02-28 00:59:04.795413 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-28 00:59:04.795417 | orchestrator | Saturday 28 February 2026 00:51:58 +0000 (0:00:01.623) 0:04:50.541 ***** 2026-02-28 00:59:04.795420 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:04.795424 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:04.795428 | orchestrator | changed: 
[testbed-node-2] 2026-02-28 00:59:04.795432 | orchestrator | 2026-02-28 00:59:04.795436 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-28 00:59:04.795440 | orchestrator | Saturday 28 February 2026 00:52:00 +0000 (0:00:02.321) 0:04:52.863 ***** 2026-02-28 00:59:04.795443 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:04.795447 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:04.795451 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:04.795455 | orchestrator | 2026-02-28 00:59:04.795459 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-28 00:59:04.795463 | orchestrator | Saturday 28 February 2026 00:52:02 +0000 (0:00:02.319) 0:04:55.182 ***** 2026-02-28 00:59:04.795467 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:04.795471 | orchestrator | 2026-02-28 00:59:04.795474 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-28 00:59:04.795478 | orchestrator | Saturday 28 February 2026 00:52:03 +0000 (0:00:00.825) 0:04:56.008 ***** 2026-02-28 00:59:04.795486 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-28 00:59:04.795490 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.795494 | orchestrator |
2026-02-28 00:59:04.795497 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-02-28 00:59:04.795501 | orchestrator | Saturday 28 February 2026 00:52:25 +0000 (0:00:21.938) 0:05:17.946 *****
2026-02-28 00:59:04.795505 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.795509 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.795513 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.795516 | orchestrator |
2026-02-28 00:59:04.795520 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-02-28 00:59:04.795524 | orchestrator | Saturday 28 February 2026 00:52:34 +0000 (0:00:09.421) 0:05:27.368 *****
2026-02-28 00:59:04.795528 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.795532 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.795536 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.795539 | orchestrator |
2026-02-28 00:59:04.795543 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-02-28 00:59:04.795561 | orchestrator | Saturday 28 February 2026 00:52:35 +0000 (0:00:00.807) 0:05:28.175 *****
2026-02-28 00:59:04.795567 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ce061e8770e346d9a9923d60ba1b810db6728d1b'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-28 00:59:04.795577 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ce061e8770e346d9a9923d60ba1b810db6728d1b'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-28 00:59:04.795582 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ce061e8770e346d9a9923d60ba1b810db6728d1b'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-28 00:59:04.795587 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ce061e8770e346d9a9923d60ba1b810db6728d1b'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-28 00:59:04.795592 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ce061e8770e346d9a9923d60ba1b810db6728d1b'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-28 00:59:04.795596 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ce061e8770e346d9a9923d60ba1b810db6728d1b'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__ce061e8770e346d9a9923d60ba1b810db6728d1b'}])
2026-02-28 00:59:04.795601 | orchestrator |
2026-02-28 00:59:04.795605 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-28 00:59:04.795609 | orchestrator | Saturday 28 February 2026 00:52:49 +0000 (0:00:14.321) 0:05:42.497 *****
2026-02-28 00:59:04.795630 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.795634 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.795638 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.795642 | orchestrator |
2026-02-28 00:59:04.795646 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-28 00:59:04.795650 | orchestrator | Saturday 28 February 2026 00:52:50 +0000 (0:00:00.457) 0:05:42.954 *****
2026-02-28 00:59:04.795653 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-2, testbed-node-1
2026-02-28 00:59:04.795657 | orchestrator |
2026-02-28 00:59:04.795661 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-28 00:59:04.795665 | orchestrator | Saturday 28 February 2026 00:52:51 +0000 (0:00:01.028) 0:05:43.983 *****
2026-02-28 00:59:04.795669 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.795673 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.795677 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.795680 | orchestrator |
2026-02-28 00:59:04.795684 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-28 00:59:04.795688 | orchestrator | Saturday 28 February 2026 00:52:51 +0000 (0:00:00.398) 0:05:44.381 *****
2026-02-28 00:59:04.795695 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.795699 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.795703 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.795710 | orchestrator |
2026-02-28 00:59:04.795714 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-28 00:59:04.795718 | orchestrator | Saturday 28 February 2026 00:52:52 +0000 (0:00:00.339) 0:05:44.720 *****
2026-02-28 00:59:04.795722 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-28 00:59:04.795726 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-28 00:59:04.795729 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-28 00:59:04.795734 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.795741 | orchestrator |
2026-02-28 00:59:04.795747 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-28 00:59:04.795753 | orchestrator | Saturday 28 February 2026 00:52:53 +0000 (0:00:00.894) 0:05:45.614 *****
2026-02-28 00:59:04.795759 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.795766 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.795789 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.795796 | orchestrator |
2026-02-28 00:59:04.795799 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-02-28 00:59:04.795803 | orchestrator |
2026-02-28 00:59:04.795807 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-28 00:59:04.795811 | orchestrator | Saturday 28 February 2026 00:52:53 +0000 (0:00:00.878) 0:05:46.493 *****
2026-02-28 00:59:04.795815 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:59:04.795819 | orchestrator |
2026-02-28 00:59:04.795823 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-28 00:59:04.795826 | orchestrator | Saturday 28 February 2026 00:52:54 +0000 (0:00:00.638) 0:05:47.132 *****
2026-02-28 00:59:04.795830 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:59:04.795834 | orchestrator |
2026-02-28 00:59:04.795838 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-28 00:59:04.795842 | orchestrator | Saturday 28 February 2026 00:52:55 +0000 (0:00:00.909) 0:05:48.041 *****
2026-02-28 00:59:04.795846 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.795850 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.795853 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.795857 | orchestrator |
2026-02-28 00:59:04.795861 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-28 00:59:04.795865 | orchestrator | Saturday 28 February 2026 00:52:56 +0000 (0:00:00.989) 0:05:49.031 *****
2026-02-28 00:59:04.795869 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.795872 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.795876 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.795880 | orchestrator |
2026-02-28 00:59:04.795884 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-28 00:59:04.795888 | orchestrator | Saturday 28 February 2026 00:52:56 +0000 (0:00:00.336) 0:05:49.367 *****
2026-02-28 00:59:04.795891 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.795895 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.795899 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.795903 | orchestrator |
2026-02-28 00:59:04.795907 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-28 00:59:04.795911 | orchestrator | Saturday 28 February 2026 00:52:57 +0000 (0:00:00.619) 0:05:49.986 *****
2026-02-28 00:59:04.795914 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.795918 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.795922 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.795926 | orchestrator |
2026-02-28 00:59:04.795930 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-28 00:59:04.795933 | orchestrator | Saturday 28 February 2026 00:52:57 +0000 (0:00:00.330) 0:05:50.317 *****
2026-02-28 00:59:04.795937 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.795945 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.795949 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.795953 | orchestrator |
2026-02-28 00:59:04.795957 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-28 00:59:04.795961 | orchestrator | Saturday 28 February 2026 00:52:58 +0000 (0:00:00.711) 0:05:51.028 *****
2026-02-28 00:59:04.795965 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.795969 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.795972 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.795976 | orchestrator |
2026-02-28 00:59:04.795980 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-28 00:59:04.795984 | orchestrator | Saturday 28 February 2026 00:52:58 +0000 (0:00:00.330) 0:05:51.359 *****
2026-02-28 00:59:04.795988 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.795992 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.795995 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.795999 | orchestrator |
2026-02-28 00:59:04.796003 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-28 00:59:04.796007 | orchestrator | Saturday 28 February 2026 00:52:59 +0000 (0:00:00.630) 0:05:51.990 *****
2026-02-28 00:59:04.796010 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.796014 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.796018 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.796022 | orchestrator |
2026-02-28 00:59:04.796026 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-28 00:59:04.796030 | orchestrator | Saturday 28 February 2026 00:53:00 +0000 (0:00:00.832) 0:05:52.823 *****
2026-02-28 00:59:04.796034 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.796037 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.796041 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.796045 | orchestrator |
2026-02-28 00:59:04.796049 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-28 00:59:04.796053 | orchestrator | Saturday 28 February 2026 00:53:01 +0000 (0:00:00.858) 0:05:53.681 *****
2026-02-28 00:59:04.796057 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.796060 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.796068 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.796071 | orchestrator |
2026-02-28 00:59:04.796075 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-28 00:59:04.796079 | orchestrator | Saturday 28 February 2026 00:53:01 +0000 (0:00:00.316) 0:05:53.998 *****
2026-02-28 00:59:04.796083 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.796087 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.796091 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.796095 | orchestrator |
2026-02-28 00:59:04.796098 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-28 00:59:04.796102 | orchestrator | Saturday 28 February 2026 00:53:02 +0000 (0:00:00.586) 0:05:54.585 *****
2026-02-28 00:59:04.796106 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.796110 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.796114 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.796117 | orchestrator |
2026-02-28 00:59:04.796121 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-28 00:59:04.796137 | orchestrator | Saturday 28 February 2026 00:53:02 +0000 (0:00:00.343) 0:05:54.928 *****
2026-02-28 00:59:04.796142 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.796146 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.796149 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.796153 | orchestrator |
2026-02-28 00:59:04.796157 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-28 00:59:04.796161 | orchestrator | Saturday 28 February 2026 00:53:02 +0000 (0:00:00.357) 0:05:55.285 *****
2026-02-28 00:59:04.796164 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.796168 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.796172 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.796179 | orchestrator |
2026-02-28 00:59:04.796183 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-28 00:59:04.796187 | orchestrator | Saturday 28 February 2026 00:53:03 +0000 (0:00:00.341) 0:05:55.626 *****
2026-02-28 00:59:04.796191 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.796194 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.796198 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.796202 | orchestrator |
2026-02-28 00:59:04.796206 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-28 00:59:04.796210 | orchestrator | Saturday 28 February 2026 00:53:03 +0000 (0:00:00.364) 0:05:55.991 *****
2026-02-28 00:59:04.796214 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.796217 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.796221 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.796225 | orchestrator |
2026-02-28 00:59:04.796229 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-28 00:59:04.796233 | orchestrator | Saturday 28 February 2026 00:53:04 +0000 (0:00:00.619) 0:05:56.610 *****
2026-02-28 00:59:04.796236 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.796240 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.796244 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.796248 | orchestrator |
2026-02-28 00:59:04.796252 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-28 00:59:04.796256 | orchestrator | Saturday 28 February 2026 00:53:04 +0000 (0:00:00.353) 0:05:56.964 *****
2026-02-28 00:59:04.796259 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.796263 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.796267 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.796271 | orchestrator |
2026-02-28 00:59:04.796275 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-28 00:59:04.796278 | orchestrator | Saturday 28 February 2026 00:53:04 +0000 (0:00:00.338) 0:05:57.302 *****
2026-02-28 00:59:04.796282 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.796286 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.796290 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.796293 | orchestrator |
2026-02-28 00:59:04.796297 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-28 00:59:04.796301 | orchestrator | Saturday 28 February 2026 00:53:05 +0000 (0:00:00.943) 0:05:58.245 *****
2026-02-28 00:59:04.796305 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-28 00:59:04.796309 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-28 00:59:04.796313 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-28 00:59:04.796317 | orchestrator |
2026-02-28 00:59:04.796320 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-28 00:59:04.796324 | orchestrator | Saturday 28 February 2026 00:53:06 +0000 (0:00:00.643) 0:05:58.888 *****
2026-02-28 00:59:04.796328 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:59:04.796332 | orchestrator |
2026-02-28 00:59:04.796336 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-28 00:59:04.796339 | orchestrator | Saturday 28 February 2026 00:53:06 +0000 (0:00:00.564) 0:05:59.453 *****
2026-02-28 00:59:04.796343 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:59:04.796347 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:59:04.796351 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:59:04.796355 | orchestrator |
2026-02-28 00:59:04.796359 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-28 00:59:04.796363 | orchestrator | Saturday 28 February 2026 00:53:07 +0000 (0:00:00.815) 0:06:00.268 *****
2026-02-28 00:59:04.796366 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.796370 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.796374 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.796382 | orchestrator |
2026-02-28 00:59:04.796386 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-28 00:59:04.796390 | orchestrator | Saturday 28 February 2026 00:53:08 +0000 (0:00:00.855) 0:06:01.124 *****
2026-02-28 00:59:04.796394 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-28 00:59:04.796398 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-28 00:59:04.796401 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-28 00:59:04.796405 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-02-28 00:59:04.796409 | orchestrator |
2026-02-28 00:59:04.796417 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-28 00:59:04.796421 | orchestrator | Saturday 28 February 2026 00:53:19 +0000 (0:00:10.485) 0:06:11.610 *****
2026-02-28 00:59:04.796425 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.796429 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.796433 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.796436 | orchestrator |
2026-02-28 00:59:04.796440 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-28 00:59:04.796444 | orchestrator | Saturday 28 February 2026 00:53:19 +0000 (0:00:00.466) 0:06:12.076 *****
2026-02-28 00:59:04.796448 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-28 00:59:04.796452 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-28 00:59:04.796456 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-28 00:59:04.796459 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-28 00:59:04.796463 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-28 00:59:04.796479 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-28 00:59:04.796484 | orchestrator |
2026-02-28 00:59:04.796487 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-28 00:59:04.796491 | orchestrator | Saturday 28 February 2026 00:53:21 +0000 (0:00:02.205) 0:06:14.282 *****
2026-02-28 00:59:04.796495 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-28 00:59:04.796499 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-28 00:59:04.796503 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-28 00:59:04.796506 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-28 00:59:04.796510 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-02-28 00:59:04.796514 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-28 00:59:04.796518 | orchestrator |
2026-02-28 00:59:04.796522 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-28 00:59:04.796526 | orchestrator | Saturday 28 February 2026 00:53:23 +0000 (0:00:01.346) 0:06:15.629 *****
2026-02-28 00:59:04.796530 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.796534 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.796537 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.796541 | orchestrator |
2026-02-28 00:59:04.796545 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-28 00:59:04.796549 | orchestrator | Saturday 28 February 2026 00:53:24 +0000 (0:00:00.912) 0:06:16.541 *****
2026-02-28 00:59:04.796553 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.796557 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.796560 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.796564 | orchestrator |
2026-02-28 00:59:04.796568 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-28 00:59:04.796572 | orchestrator | Saturday 28 February 2026 00:53:24 +0000 (0:00:00.321) 0:06:16.863 *****
2026-02-28 00:59:04.796576 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.796579 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.796583 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.796587 | orchestrator |
2026-02-28 00:59:04.796591 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-28 00:59:04.796595 | orchestrator | Saturday 28 February 2026 00:53:24 +0000 (0:00:00.347) 0:06:17.210 *****
2026-02-28 00:59:04.796602 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:59:04.796606 | orchestrator |
2026-02-28 00:59:04.796610 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-28 00:59:04.796625 | orchestrator | Saturday 28 February 2026 00:53:25 +0000 (0:00:00.876) 0:06:18.087 *****
2026-02-28 00:59:04.796629 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.796633 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.796637 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.796641 | orchestrator |
2026-02-28 00:59:04.796645 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-28 00:59:04.796649 | orchestrator | Saturday 28 February 2026 00:53:25 +0000 (0:00:00.340) 0:06:18.427 *****
2026-02-28 00:59:04.796652 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.796656 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.796660 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:59:04.796664 | orchestrator |
2026-02-28 00:59:04.796668 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-28 00:59:04.796671 | orchestrator | Saturday 28 February 2026 00:53:26 +0000 (0:00:00.321) 0:06:18.749 *****
2026-02-28 00:59:04.796675 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:59:04.796679 | orchestrator |
2026-02-28 00:59:04.796683 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-02-28 00:59:04.796687 | orchestrator | Saturday 28 February 2026 00:53:27 +0000 (0:00:00.870) 0:06:19.620 *****
2026-02-28 00:59:04.796690 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:59:04.796694 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:59:04.796698 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:59:04.796702 | orchestrator |
2026-02-28 00:59:04.796705 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-02-28 00:59:04.796709 | orchestrator | Saturday 28 February 2026 00:53:28 +0000 (0:00:01.323) 0:06:20.943 *****
2026-02-28 00:59:04.796713 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:59:04.796717 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:59:04.796721 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:59:04.796724 | orchestrator |
2026-02-28 00:59:04.796728 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-02-28 00:59:04.796732 | orchestrator | Saturday 28 February 2026 00:53:29 +0000 (0:00:01.212) 0:06:22.156 *****
2026-02-28 00:59:04.796736 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:59:04.796739 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:59:04.796743 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:59:04.796747 | orchestrator |
2026-02-28 00:59:04.796751 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-02-28 00:59:04.796757 | orchestrator | Saturday 28 February 2026 00:53:31 +0000 (0:00:01.832) 0:06:23.988 *****
2026-02-28 00:59:04.796761 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:59:04.796765 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:59:04.796769 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:59:04.796773 | orchestrator |
2026-02-28 00:59:04.796776 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-28 00:59:04.796780 | orchestrator | Saturday 28 February 2026 00:53:33 +0000 (0:00:02.362) 0:06:26.351 *****
2026-02-28 00:59:04.796784 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.796788 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:59:04.796791 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-02-28 00:59:04.796795 | orchestrator |
2026-02-28 00:59:04.796799 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-02-28 00:59:04.796803 | orchestrator | Saturday 28 February 2026 00:53:34 +0000 (0:00:00.604) 0:06:26.956 *****
2026-02-28 00:59:04.796818 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-02-28 00:59:04.796827 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-02-28 00:59:04.796831 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-02-28 00:59:04.796835 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-02-28 00:59:04.796839 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-02-28 00:59:04.796843 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-28 00:59:04.796847 | orchestrator |
2026-02-28 00:59:04.796851 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-02-28 00:59:04.796855 | orchestrator | Saturday 28 February 2026 00:54:05 +0000 (0:00:30.611) 0:06:57.567 *****
2026-02-28 00:59:04.796858 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-28 00:59:04.796862 | orchestrator |
2026-02-28 00:59:04.796866 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-02-28 00:59:04.796870 | orchestrator | Saturday 28 February 2026 00:54:06 +0000 (0:00:01.354) 0:06:58.922 *****
2026-02-28 00:59:04.796874 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.796877 | orchestrator |
2026-02-28 00:59:04.796881 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-02-28 00:59:04.796885 | orchestrator | Saturday 28 February 2026 00:54:06 +0000 (0:00:00.424) 0:06:59.346 *****
2026-02-28 00:59:04.796889 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.796893 | orchestrator |
2026-02-28 00:59:04.796897 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-02-28 00:59:04.796900 | orchestrator | Saturday 28 February 2026 00:54:07 +0000 (0:00:00.179) 0:06:59.526 *****
2026-02-28 00:59:04.796904 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-02-28 00:59:04.796908 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-02-28 00:59:04.796912 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-02-28 00:59:04.796915 | orchestrator |
2026-02-28 00:59:04.796919 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-02-28 00:59:04.796923 | orchestrator | Saturday 28 February 2026 00:54:13 +0000 (0:00:06.922) 0:07:06.448 *****
2026-02-28 00:59:04.796927 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-02-28 00:59:04.796931 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-02-28 00:59:04.796934 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-02-28 00:59:04.796938 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-02-28 00:59:04.796942 | orchestrator |
2026-02-28 00:59:04.796946 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-28 00:59:04.796950 | orchestrator | Saturday 28 February 2026 00:54:19 +0000 (0:00:05.451) 0:07:11.900 *****
2026-02-28 00:59:04.796954 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:59:04.796957 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:59:04.796961 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:59:04.796965 | orchestrator |
2026-02-28 00:59:04.796969 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-28 00:59:04.796973 | orchestrator | Saturday 28 February 2026 00:54:20 +0000 (0:00:00.786) 0:07:12.687 *****
2026-02-28 00:59:04.796977 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:59:04.796980 | orchestrator |
2026-02-28 00:59:04.796984 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-28 00:59:04.796988 | orchestrator | Saturday 28 February 2026 00:54:21 +0000 (0:00:01.065) 0:07:13.752 *****
2026-02-28 00:59:04.796994 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.796998 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.797002 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.797006 | orchestrator |
2026-02-28 00:59:04.797010 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-28 00:59:04.797013 | orchestrator | Saturday 28 February 2026 00:54:21 +0000 (0:00:00.399) 0:07:14.152 *****
2026-02-28 00:59:04.797017 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:59:04.797021 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:59:04.797025 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:59:04.797029 | orchestrator |
2026-02-28 00:59:04.797032 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-28 00:59:04.797036 | orchestrator | Saturday 28 February 2026 00:54:23 +0000 (0:00:01.768) 0:07:15.920 *****
2026-02-28 00:59:04.797040 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-28 00:59:04.797046 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-28 00:59:04.797050 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-28 00:59:04.797054 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:59:04.797058 | orchestrator |
2026-02-28 00:59:04.797062 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-28 00:59:04.797066 | orchestrator | Saturday 28 February 2026 00:54:24 +0000 (0:00:00.677) 0:07:16.598 *****
2026-02-28 00:59:04.797070 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:04.797073 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:04.797077 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:04.797081 | orchestrator |
2026-02-28 00:59:04.797085 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-02-28 00:59:04.797089 | orchestrator |
2026-02-28 00:59:04.797092 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-28 00:59:04.797096 | orchestrator | Saturday 28 February 2026 00:54:24 +0000 (0:00:00.874) 0:07:17.472 *****
2026-02-28 00:59:04.797111 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:59:04.797116 | orchestrator |
2026-02-28 00:59:04.797120 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-28 00:59:04.797124 | orchestrator | Saturday 28 February 2026 00:54:25 +0000 (0:00:00.618) 0:07:18.091 *****
2026-02-28 00:59:04.797128 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:59:04.797131 | orchestrator |
2026-02-28 00:59:04.797135 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-28 00:59:04.797139 | orchestrator | Saturday 28 February 2026 00:54:26 +0000 (0:00:01.035) 0:07:19.128 *****
2026-02-28 00:59:04.797143 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.797147 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.797150 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.797154 | orchestrator |
2026-02-28 00:59:04.797158 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-28 00:59:04.797162 | orchestrator | Saturday 28 February 2026 00:54:27 +0000 (0:00:00.422) 0:07:19.550 *****
2026-02-28 00:59:04.797166 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.797169 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.797173 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.797177 | orchestrator |
2026-02-28 00:59:04.797181 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-28 00:59:04.797185 | orchestrator | Saturday 28 February 2026 00:54:27 +0000 (0:00:00.664) 0:07:20.215 *****
2026-02-28 00:59:04.797189 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.797192 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.797196 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.797200 | orchestrator |
2026-02-28 00:59:04.797204 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-28 00:59:04.797208 | orchestrator | Saturday 28 February 2026 00:54:28 +0000 (0:00:00.723) 0:07:20.938 *****
2026-02-28 00:59:04.797215 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.797219 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.797223 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.797227 | orchestrator |
2026-02-28 00:59:04.797230 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-28 00:59:04.797234 | orchestrator | Saturday 28 February 2026 00:54:29 +0000 (0:00:00.870) 0:07:21.809 *****
2026-02-28 00:59:04.797238 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.797242 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.797246 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.797250 | orchestrator |
2026-02-28 00:59:04.797253 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-28 00:59:04.797257 | orchestrator | Saturday 28 February 2026 00:54:29 +0000 (0:00:00.303) 0:07:22.113 *****
2026-02-28 00:59:04.797261 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.797265 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.797269 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.797273 | orchestrator |
2026-02-28 00:59:04.797276 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-28 00:59:04.797280 | orchestrator | Saturday 28 February 2026 00:54:29 +0000 (0:00:00.274) 0:07:22.387 *****
2026-02-28 00:59:04.797284 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.797288 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.797292 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.797296 | orchestrator |
2026-02-28 00:59:04.797299 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-28 00:59:04.797303 | orchestrator | Saturday 28 February 2026 00:54:30 +0000 (0:00:00.301) 0:07:22.689 *****
2026-02-28 00:59:04.797307 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.797311 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.797315 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.797318 | orchestrator |
2026-02-28 00:59:04.797322 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-28 00:59:04.797326 | orchestrator | Saturday 28 February 2026 00:54:31 +0000 (0:00:00.858) 0:07:23.548 *****
2026-02-28 00:59:04.797330 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.797334 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.797337 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.797341 | orchestrator |
2026-02-28 00:59:04.797345 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-28 00:59:04.797349 | orchestrator | Saturday 28 February 2026 00:54:31 +0000 (0:00:00.649) 0:07:24.197 *****
2026-02-28 00:59:04.797353 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.797357 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.797360 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.797364 | orchestrator |
2026-02-28 00:59:04.797368 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-28 00:59:04.797372 | orchestrator | Saturday 28 February 2026 00:54:31 +0000 (0:00:00.254) 0:07:24.452 *****
2026-02-28 00:59:04.797376 | orchestrator | skipping:
[testbed-node-3] 2026-02-28 00:59:04.797379 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.797383 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.797387 | orchestrator | 2026-02-28 00:59:04.797393 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-28 00:59:04.797397 | orchestrator | Saturday 28 February 2026 00:54:32 +0000 (0:00:00.233) 0:07:24.686 ***** 2026-02-28 00:59:04.797401 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.797405 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.797409 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.797412 | orchestrator | 2026-02-28 00:59:04.797416 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-28 00:59:04.797420 | orchestrator | Saturday 28 February 2026 00:54:32 +0000 (0:00:00.503) 0:07:25.189 ***** 2026-02-28 00:59:04.797424 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.797432 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.797436 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.797440 | orchestrator | 2026-02-28 00:59:04.797444 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-28 00:59:04.797447 | orchestrator | Saturday 28 February 2026 00:54:33 +0000 (0:00:00.313) 0:07:25.503 ***** 2026-02-28 00:59:04.797451 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.797455 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.797461 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.797465 | orchestrator | 2026-02-28 00:59:04.797469 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-28 00:59:04.797473 | orchestrator | Saturday 28 February 2026 00:54:33 +0000 (0:00:00.295) 0:07:25.798 ***** 2026-02-28 00:59:04.797477 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.797490 | 
orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.797495 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.797498 | orchestrator | 2026-02-28 00:59:04.797502 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-28 00:59:04.797506 | orchestrator | Saturday 28 February 2026 00:54:33 +0000 (0:00:00.307) 0:07:26.105 ***** 2026-02-28 00:59:04.797510 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.797514 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.797518 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.797521 | orchestrator | 2026-02-28 00:59:04.797525 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-28 00:59:04.797529 | orchestrator | Saturday 28 February 2026 00:54:34 +0000 (0:00:00.571) 0:07:26.677 ***** 2026-02-28 00:59:04.797533 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.797537 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.797541 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.797545 | orchestrator | 2026-02-28 00:59:04.797548 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-28 00:59:04.797552 | orchestrator | Saturday 28 February 2026 00:54:34 +0000 (0:00:00.491) 0:07:27.168 ***** 2026-02-28 00:59:04.797556 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.797560 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.797564 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.797568 | orchestrator | 2026-02-28 00:59:04.797572 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-28 00:59:04.797576 | orchestrator | Saturday 28 February 2026 00:54:35 +0000 (0:00:00.368) 0:07:27.536 ***** 2026-02-28 00:59:04.797579 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.797583 | orchestrator | ok: 
[testbed-node-4] 2026-02-28 00:59:04.797587 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.797591 | orchestrator | 2026-02-28 00:59:04.797595 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-28 00:59:04.797599 | orchestrator | Saturday 28 February 2026 00:54:36 +0000 (0:00:00.985) 0:07:28.522 ***** 2026-02-28 00:59:04.797603 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.797606 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.797610 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.797644 | orchestrator | 2026-02-28 00:59:04.797648 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-28 00:59:04.797652 | orchestrator | Saturday 28 February 2026 00:54:36 +0000 (0:00:00.334) 0:07:28.857 ***** 2026-02-28 00:59:04.797656 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-28 00:59:04.797659 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-28 00:59:04.797663 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-28 00:59:04.797667 | orchestrator | 2026-02-28 00:59:04.797671 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-28 00:59:04.797675 | orchestrator | Saturday 28 February 2026 00:54:37 +0000 (0:00:00.822) 0:07:29.680 ***** 2026-02-28 00:59:04.797683 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:59:04.797686 | orchestrator | 2026-02-28 00:59:04.797690 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-28 00:59:04.797694 | orchestrator | Saturday 28 February 2026 00:54:37 +0000 (0:00:00.608) 0:07:30.288 ***** 2026-02-28 00:59:04.797698 | orchestrator | skipping: 
[testbed-node-3] 2026-02-28 00:59:04.797702 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.797705 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.797709 | orchestrator | 2026-02-28 00:59:04.797713 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-28 00:59:04.797717 | orchestrator | Saturday 28 February 2026 00:54:38 +0000 (0:00:00.653) 0:07:30.942 ***** 2026-02-28 00:59:04.797721 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.797725 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.797728 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.797732 | orchestrator | 2026-02-28 00:59:04.797736 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-28 00:59:04.797740 | orchestrator | Saturday 28 February 2026 00:54:38 +0000 (0:00:00.352) 0:07:31.294 ***** 2026-02-28 00:59:04.797744 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.797748 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.797751 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.797755 | orchestrator | 2026-02-28 00:59:04.797759 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-28 00:59:04.797763 | orchestrator | Saturday 28 February 2026 00:54:39 +0000 (0:00:00.779) 0:07:32.074 ***** 2026-02-28 00:59:04.797767 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.797773 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.797777 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.797781 | orchestrator | 2026-02-28 00:59:04.797785 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-28 00:59:04.797789 | orchestrator | Saturday 28 February 2026 00:54:39 +0000 (0:00:00.345) 0:07:32.419 ***** 2026-02-28 00:59:04.797792 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-28 00:59:04.797796 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-28 00:59:04.797800 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-28 00:59:04.797804 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-28 00:59:04.797808 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-28 00:59:04.797817 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-28 00:59:04.797822 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-28 00:59:04.797825 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-28 00:59:04.797829 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-28 00:59:04.797833 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-28 00:59:04.797837 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-28 00:59:04.797841 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-28 00:59:04.797845 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-28 00:59:04.797848 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-28 00:59:04.797852 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-28 00:59:04.797856 | orchestrator | 2026-02-28 00:59:04.797860 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-02-28 00:59:04.797867 | orchestrator | Saturday 28 February 2026 00:54:43 +0000 (0:00:03.240) 0:07:35.660 ***** 2026-02-28 00:59:04.797871 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.797875 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.797879 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.797883 | orchestrator | 2026-02-28 00:59:04.797887 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-28 00:59:04.797891 | orchestrator | Saturday 28 February 2026 00:54:43 +0000 (0:00:00.335) 0:07:35.996 ***** 2026-02-28 00:59:04.797894 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:59:04.797898 | orchestrator | 2026-02-28 00:59:04.797902 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-28 00:59:04.797906 | orchestrator | Saturday 28 February 2026 00:54:44 +0000 (0:00:00.576) 0:07:36.572 ***** 2026-02-28 00:59:04.797910 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-28 00:59:04.797914 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-02-28 00:59:04.797918 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-28 00:59:04.797922 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-28 00:59:04.797925 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-02-28 00:59:04.797929 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-02-28 00:59:04.797933 | orchestrator | 2026-02-28 00:59:04.797937 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-28 00:59:04.797941 | orchestrator | Saturday 28 February 2026 00:54:45 +0000 (0:00:01.473) 0:07:38.046 ***** 2026-02-28 00:59:04.797945 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 00:59:04.797949 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-28 00:59:04.797952 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-28 00:59:04.797956 | orchestrator | 2026-02-28 00:59:04.797960 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-28 00:59:04.797964 | orchestrator | Saturday 28 February 2026 00:54:47 +0000 (0:00:02.175) 0:07:40.222 ***** 2026-02-28 00:59:04.797968 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-28 00:59:04.797972 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-28 00:59:04.797976 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:59:04.797980 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-28 00:59:04.797983 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-28 00:59:04.797987 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:59:04.797991 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-28 00:59:04.797995 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-28 00:59:04.797999 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:59:04.798002 | orchestrator | 2026-02-28 00:59:04.798006 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-28 00:59:04.798010 | orchestrator | Saturday 28 February 2026 00:54:48 +0000 (0:00:01.244) 0:07:41.466 ***** 2026-02-28 00:59:04.798060 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-28 00:59:04.798064 | orchestrator | 2026-02-28 00:59:04.798068 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-28 00:59:04.798072 | orchestrator | Saturday 28 February 2026 00:54:50 +0000 (0:00:01.997) 0:07:43.463 ***** 2026-02-28 00:59:04.798076 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:59:04.798080 | orchestrator | 2026-02-28 00:59:04.798084 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-02-28 00:59:04.798088 | orchestrator | Saturday 28 February 2026 00:54:51 +0000 (0:00:00.678) 0:07:44.142 ***** 2026-02-28 00:59:04.798094 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18', 'data_vg': 'ceph-04fa4cbf-2eb3-5c27-a3dd-f7c2dcd9ac18'}) 2026-02-28 00:59:04.798107 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-73c4f4bf-6139-5634-9e57-de597eca9964', 'data_vg': 'ceph-73c4f4bf-6139-5634-9e57-de597eca9964'}) 2026-02-28 00:59:04.798118 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4d18609e-ecdb-578d-a05b-e7913934f080', 'data_vg': 'ceph-4d18609e-ecdb-578d-a05b-e7913934f080'}) 2026-02-28 00:59:04.798124 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-17f6d453-f54a-57d2-bd55-b12b469b0db8', 'data_vg': 'ceph-17f6d453-f54a-57d2-bd55-b12b469b0db8'}) 2026-02-28 00:59:04.798129 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-dcf33d59-3ae6-5017-b2aa-1b02884ceea7', 'data_vg': 'ceph-dcf33d59-3ae6-5017-b2aa-1b02884ceea7'}) 2026-02-28 00:59:04.798135 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539', 'data_vg': 'ceph-f45f70cf-4b1a-5b52-bc0a-6a4d28c0a539'}) 2026-02-28 00:59:04.798140 | orchestrator | 2026-02-28 00:59:04.798150 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-28 00:59:04.798183 | orchestrator | Saturday 28 February 2026 00:55:35 +0000 (0:00:43.866) 0:08:28.008 ***** 2026-02-28 00:59:04.798190 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.798197 | orchestrator | skipping: [testbed-node-4] 2026-02-28 
00:59:04.798202 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.798208 | orchestrator | 2026-02-28 00:59:04.798214 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-28 00:59:04.798220 | orchestrator | Saturday 28 February 2026 00:55:35 +0000 (0:00:00.367) 0:08:28.375 ***** 2026-02-28 00:59:04.798226 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:59:04.798233 | orchestrator | 2026-02-28 00:59:04.798239 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-28 00:59:04.798245 | orchestrator | Saturday 28 February 2026 00:55:36 +0000 (0:00:00.776) 0:08:29.152 ***** 2026-02-28 00:59:04.798250 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.798256 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.798263 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.798269 | orchestrator | 2026-02-28 00:59:04.798275 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-02-28 00:59:04.798281 | orchestrator | Saturday 28 February 2026 00:55:37 +0000 (0:00:00.682) 0:08:29.834 ***** 2026-02-28 00:59:04.798288 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.798294 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.798300 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.798306 | orchestrator | 2026-02-28 00:59:04.798312 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-28 00:59:04.798319 | orchestrator | Saturday 28 February 2026 00:55:40 +0000 (0:00:02.850) 0:08:32.685 ***** 2026-02-28 00:59:04.798325 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:59:04.798331 | orchestrator | 2026-02-28 00:59:04.798338 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-02-28 00:59:04.798345 | orchestrator | Saturday 28 February 2026 00:55:41 +0000 (0:00:00.914) 0:08:33.599 ***** 2026-02-28 00:59:04.798351 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:59:04.798357 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:59:04.798363 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:59:04.798369 | orchestrator | 2026-02-28 00:59:04.798376 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-28 00:59:04.798382 | orchestrator | Saturday 28 February 2026 00:55:42 +0000 (0:00:01.200) 0:08:34.799 ***** 2026-02-28 00:59:04.798388 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:59:04.798394 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:59:04.798401 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:59:04.798412 | orchestrator | 2026-02-28 00:59:04.798416 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-28 00:59:04.798420 | orchestrator | Saturday 28 February 2026 00:55:43 +0000 (0:00:01.166) 0:08:35.966 ***** 2026-02-28 00:59:04.798424 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:59:04.798427 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:59:04.798431 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:59:04.798435 | orchestrator | 2026-02-28 00:59:04.798439 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-28 00:59:04.798443 | orchestrator | Saturday 28 February 2026 00:55:45 +0000 (0:00:02.059) 0:08:38.026 ***** 2026-02-28 00:59:04.798447 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.798451 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.798455 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.798459 | orchestrator | 2026-02-28 00:59:04.798463 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-02-28 00:59:04.798466 | orchestrator | Saturday 28 February 2026 00:55:45 +0000 (0:00:00.401) 0:08:38.428 ***** 2026-02-28 00:59:04.798470 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.798474 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.798478 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.798482 | orchestrator | 2026-02-28 00:59:04.798485 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-28 00:59:04.798495 | orchestrator | Saturday 28 February 2026 00:55:46 +0000 (0:00:00.641) 0:08:39.069 ***** 2026-02-28 00:59:04.798499 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-02-28 00:59:04.798503 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-02-28 00:59:04.798507 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-28 00:59:04.798510 | orchestrator | ok: [testbed-node-3] => (item=1) 2026-02-28 00:59:04.798514 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-28 00:59:04.798518 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-02-28 00:59:04.798522 | orchestrator | 2026-02-28 00:59:04.798526 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-28 00:59:04.798530 | orchestrator | Saturday 28 February 2026 00:55:47 +0000 (0:00:01.019) 0:08:40.088 ***** 2026-02-28 00:59:04.798533 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-02-28 00:59:04.798537 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-28 00:59:04.798541 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-02-28 00:59:04.798545 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-02-28 00:59:04.798549 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-02-28 00:59:04.798557 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-02-28 00:59:04.798561 | orchestrator | 2026-02-28 00:59:04.798565 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-02-28 00:59:04.798568 | orchestrator | Saturday 28 February 2026 00:55:49 +0000 (0:00:02.285) 0:08:42.374 ***** 2026-02-28 00:59:04.798572 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-28 00:59:04.798576 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-02-28 00:59:04.798580 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-02-28 00:59:04.798584 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-02-28 00:59:04.798588 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-02-28 00:59:04.798591 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-02-28 00:59:04.798595 | orchestrator | 2026-02-28 00:59:04.798599 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-28 00:59:04.798603 | orchestrator | Saturday 28 February 2026 00:55:54 +0000 (0:00:04.254) 0:08:46.629 ***** 2026-02-28 00:59:04.798607 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.798611 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.798650 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-28 00:59:04.798654 | orchestrator | 2026-02-28 00:59:04.798658 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-28 00:59:04.798665 | orchestrator | Saturday 28 February 2026 00:55:56 +0000 (0:00:02.573) 0:08:49.203 ***** 2026-02-28 00:59:04.798669 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.798673 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.798677 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-02-28 00:59:04.798681 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-28 00:59:04.798685 | orchestrator | 2026-02-28 00:59:04.798688 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-28 00:59:04.798692 | orchestrator | Saturday 28 February 2026 00:56:09 +0000 (0:00:12.659) 0:09:01.862 ***** 2026-02-28 00:59:04.798696 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.798700 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.798704 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.798707 | orchestrator | 2026-02-28 00:59:04.798711 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-28 00:59:04.798715 | orchestrator | Saturday 28 February 2026 00:56:10 +0000 (0:00:01.175) 0:09:03.037 ***** 2026-02-28 00:59:04.798719 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.798723 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.798727 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.798730 | orchestrator | 2026-02-28 00:59:04.798734 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-28 00:59:04.798738 | orchestrator | Saturday 28 February 2026 00:56:10 +0000 (0:00:00.351) 0:09:03.389 ***** 2026-02-28 00:59:04.798742 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:59:04.798746 | orchestrator | 2026-02-28 00:59:04.798750 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-02-28 00:59:04.798753 | orchestrator | Saturday 28 February 2026 00:56:11 +0000 (0:00:00.535) 0:09:03.925 ***** 2026-02-28 00:59:04.798757 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 00:59:04.798761 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-02-28 00:59:04.798765 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 00:59:04.798769 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.798772 | orchestrator | 2026-02-28 00:59:04.798776 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-02-28 00:59:04.798780 | orchestrator | Saturday 28 February 2026 00:56:12 +0000 (0:00:00.962) 0:09:04.888 ***** 2026-02-28 00:59:04.798784 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.798788 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.798791 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.798795 | orchestrator | 2026-02-28 00:59:04.798799 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-02-28 00:59:04.798803 | orchestrator | Saturday 28 February 2026 00:56:12 +0000 (0:00:00.339) 0:09:05.227 ***** 2026-02-28 00:59:04.798807 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.798810 | orchestrator | 2026-02-28 00:59:04.798814 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-02-28 00:59:04.798818 | orchestrator | Saturday 28 February 2026 00:56:12 +0000 (0:00:00.227) 0:09:05.455 ***** 2026-02-28 00:59:04.798822 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.798826 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.798829 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.798833 | orchestrator | 2026-02-28 00:59:04.798837 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-02-28 00:59:04.798841 | orchestrator | Saturday 28 February 2026 00:56:13 +0000 (0:00:00.380) 0:09:05.835 ***** 2026-02-28 00:59:04.798847 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.798851 | orchestrator | 2026-02-28 00:59:04.798855 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-02-28 00:59:04.798864 | orchestrator | Saturday 28 February 2026 00:56:13 +0000 (0:00:00.221) 0:09:06.057 ***** 2026-02-28 00:59:04.798868 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.798872 | orchestrator | 2026-02-28 00:59:04.798876 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-02-28 00:59:04.798879 | orchestrator | Saturday 28 February 2026 00:56:13 +0000 (0:00:00.213) 0:09:06.271 ***** 2026-02-28 00:59:04.798883 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.798887 | orchestrator | 2026-02-28 00:59:04.798891 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-02-28 00:59:04.798895 | orchestrator | Saturday 28 February 2026 00:56:13 +0000 (0:00:00.113) 0:09:06.384 ***** 2026-02-28 00:59:04.798899 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.798903 | orchestrator | 2026-02-28 00:59:04.798910 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-02-28 00:59:04.798914 | orchestrator | Saturday 28 February 2026 00:56:14 +0000 (0:00:00.240) 0:09:06.624 ***** 2026-02-28 00:59:04.798918 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.798921 | orchestrator | 2026-02-28 00:59:04.798925 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-02-28 00:59:04.798929 | orchestrator | Saturday 28 February 2026 00:56:14 +0000 (0:00:00.843) 0:09:07.468 ***** 2026-02-28 00:59:04.798933 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 00:59:04.798937 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 00:59:04.798941 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 00:59:04.798944 | orchestrator | skipping: [testbed-node-3] 2026-02-28 
00:59:04.798948 | orchestrator | 2026-02-28 00:59:04.798952 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-02-28 00:59:04.798956 | orchestrator | Saturday 28 February 2026 00:56:15 +0000 (0:00:00.476) 0:09:07.944 ***** 2026-02-28 00:59:04.798960 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.798963 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.798967 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.798971 | orchestrator | 2026-02-28 00:59:04.798975 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-02-28 00:59:04.798978 | orchestrator | Saturday 28 February 2026 00:56:15 +0000 (0:00:00.300) 0:09:08.245 ***** 2026-02-28 00:59:04.798982 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.798986 | orchestrator | 2026-02-28 00:59:04.798990 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-02-28 00:59:04.798993 | orchestrator | Saturday 28 February 2026 00:56:15 +0000 (0:00:00.212) 0:09:08.458 ***** 2026-02-28 00:59:04.798997 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.799001 | orchestrator | 2026-02-28 00:59:04.799005 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-02-28 00:59:04.799009 | orchestrator | 2026-02-28 00:59:04.799012 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-28 00:59:04.799016 | orchestrator | Saturday 28 February 2026 00:56:16 +0000 (0:00:00.909) 0:09:09.367 ***** 2026-02-28 00:59:04.799023 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:04.799030 | orchestrator | 2026-02-28 00:59:04.799040 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-02-28 00:59:04.799047 | orchestrator | Saturday 28 February 2026 00:56:18 +0000 (0:00:01.204) 0:09:10.572 ***** 2026-02-28 00:59:04.799053 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:04.799059 | orchestrator | 2026-02-28 00:59:04.799065 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-28 00:59:04.799071 | orchestrator | Saturday 28 February 2026 00:56:19 +0000 (0:00:01.035) 0:09:11.608 ***** 2026-02-28 00:59:04.799095 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.799102 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.799108 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.799114 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.799120 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.799127 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.799133 | orchestrator | 2026-02-28 00:59:04.799139 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-28 00:59:04.799146 | orchestrator | Saturday 28 February 2026 00:56:20 +0000 (0:00:01.294) 0:09:12.903 ***** 2026-02-28 00:59:04.799153 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.799157 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.799161 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.799165 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.799168 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.799172 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.799176 | orchestrator | 2026-02-28 00:59:04.799180 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-28 00:59:04.799184 | orchestrator | Saturday 28 
February 2026 00:56:21 +0000 (0:00:00.757) 0:09:13.660 ***** 2026-02-28 00:59:04.799188 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.799191 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.799195 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.799199 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.799203 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.799207 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.799211 | orchestrator | 2026-02-28 00:59:04.799214 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-28 00:59:04.799218 | orchestrator | Saturday 28 February 2026 00:56:22 +0000 (0:00:01.109) 0:09:14.770 ***** 2026-02-28 00:59:04.799222 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.799226 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.799230 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.799237 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.799241 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.799245 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.799248 | orchestrator | 2026-02-28 00:59:04.799252 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-28 00:59:04.799256 | orchestrator | Saturday 28 February 2026 00:56:23 +0000 (0:00:00.857) 0:09:15.628 ***** 2026-02-28 00:59:04.799260 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.799264 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.799267 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.799271 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.799275 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.799279 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.799283 | orchestrator | 2026-02-28 00:59:04.799286 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-02-28 00:59:04.799290 | orchestrator | Saturday 28 February 2026 00:56:24 +0000 (0:00:01.347) 0:09:16.975 ***** 2026-02-28 00:59:04.799294 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.799298 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.799305 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.799310 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.799314 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.799317 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.799321 | orchestrator | 2026-02-28 00:59:04.799325 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-28 00:59:04.799329 | orchestrator | Saturday 28 February 2026 00:56:25 +0000 (0:00:00.643) 0:09:17.619 ***** 2026-02-28 00:59:04.799333 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.799337 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.799340 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.799344 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.799354 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.799358 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.799361 | orchestrator | 2026-02-28 00:59:04.799365 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-28 00:59:04.799369 | orchestrator | Saturday 28 February 2026 00:56:26 +0000 (0:00:00.947) 0:09:18.567 ***** 2026-02-28 00:59:04.799373 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.799377 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.799380 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.799384 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.799388 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.799392 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.799396 | orchestrator 
| 2026-02-28 00:59:04.799399 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-28 00:59:04.799403 | orchestrator | Saturday 28 February 2026 00:56:27 +0000 (0:00:01.146) 0:09:19.713 ***** 2026-02-28 00:59:04.799407 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.799411 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.799414 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.799418 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.799422 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.799426 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.799430 | orchestrator | 2026-02-28 00:59:04.799433 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-28 00:59:04.799437 | orchestrator | Saturday 28 February 2026 00:56:28 +0000 (0:00:01.410) 0:09:21.124 ***** 2026-02-28 00:59:04.799441 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.799445 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.799449 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.799452 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.799456 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.799460 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.799464 | orchestrator | 2026-02-28 00:59:04.799468 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-28 00:59:04.799471 | orchestrator | Saturday 28 February 2026 00:56:29 +0000 (0:00:00.635) 0:09:21.759 ***** 2026-02-28 00:59:04.799475 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.799479 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.799483 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.799487 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.799490 | orchestrator | ok: [testbed-node-1] 2026-02-28 
00:59:04.799494 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.799498 | orchestrator | 2026-02-28 00:59:04.799502 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-28 00:59:04.799506 | orchestrator | Saturday 28 February 2026 00:56:30 +0000 (0:00:00.956) 0:09:22.715 ***** 2026-02-28 00:59:04.799510 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.799513 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.799517 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.799521 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.799525 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.799529 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.799532 | orchestrator | 2026-02-28 00:59:04.799536 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-28 00:59:04.799540 | orchestrator | Saturday 28 February 2026 00:56:30 +0000 (0:00:00.652) 0:09:23.368 ***** 2026-02-28 00:59:04.799544 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.799548 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.799552 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.799555 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.799559 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.799563 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.799567 | orchestrator | 2026-02-28 00:59:04.799571 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-28 00:59:04.799575 | orchestrator | Saturday 28 February 2026 00:56:31 +0000 (0:00:00.921) 0:09:24.290 ***** 2026-02-28 00:59:04.799584 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.799588 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.799591 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.799595 | orchestrator | skipping: [testbed-node-0] 
2026-02-28 00:59:04.799599 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.799603 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.799607 | orchestrator | 2026-02-28 00:59:04.799611 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-28 00:59:04.799628 | orchestrator | Saturday 28 February 2026 00:56:32 +0000 (0:00:00.645) 0:09:24.935 ***** 2026-02-28 00:59:04.799632 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.799636 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.799642 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.799646 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.799650 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.799653 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.799657 | orchestrator | 2026-02-28 00:59:04.799661 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-28 00:59:04.799665 | orchestrator | Saturday 28 February 2026 00:56:33 +0000 (0:00:00.883) 0:09:25.818 ***** 2026-02-28 00:59:04.799669 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.799672 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.799676 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.799680 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:04.799684 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:04.799687 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:04.799691 | orchestrator | 2026-02-28 00:59:04.799695 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-28 00:59:04.799699 | orchestrator | Saturday 28 February 2026 00:56:33 +0000 (0:00:00.653) 0:09:26.471 ***** 2026-02-28 00:59:04.799705 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.799709 | orchestrator | skipping: [testbed-node-4] 
2026-02-28 00:59:04.799713 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.799716 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.799720 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.799724 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.799728 | orchestrator | 2026-02-28 00:59:04.799731 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-28 00:59:04.799735 | orchestrator | Saturday 28 February 2026 00:56:34 +0000 (0:00:00.967) 0:09:27.439 ***** 2026-02-28 00:59:04.799739 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.799743 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.799747 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.799751 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.799754 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.799758 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.799762 | orchestrator | 2026-02-28 00:59:04.799766 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-28 00:59:04.799770 | orchestrator | Saturday 28 February 2026 00:56:35 +0000 (0:00:00.717) 0:09:28.156 ***** 2026-02-28 00:59:04.799773 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.799777 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.799781 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.799785 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.799788 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.799792 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.799796 | orchestrator | 2026-02-28 00:59:04.799800 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-02-28 00:59:04.799804 | orchestrator | Saturday 28 February 2026 00:56:37 +0000 (0:00:01.359) 0:09:29.516 ***** 2026-02-28 00:59:04.799807 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-02-28 00:59:04.799811 | orchestrator | 2026-02-28 00:59:04.799815 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-02-28 00:59:04.799822 | orchestrator | Saturday 28 February 2026 00:56:41 +0000 (0:00:04.172) 0:09:33.688 ***** 2026-02-28 00:59:04.799826 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-28 00:59:04.799830 | orchestrator | 2026-02-28 00:59:04.799834 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-02-28 00:59:04.799837 | orchestrator | Saturday 28 February 2026 00:56:43 +0000 (0:00:02.148) 0:09:35.837 ***** 2026-02-28 00:59:04.799841 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:59:04.799845 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:59:04.799849 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:59:04.799853 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.799856 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:04.799860 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:04.799864 | orchestrator | 2026-02-28 00:59:04.799868 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-02-28 00:59:04.799872 | orchestrator | Saturday 28 February 2026 00:56:45 +0000 (0:00:01.969) 0:09:37.806 ***** 2026-02-28 00:59:04.799875 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:59:04.799879 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:59:04.799883 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:59:04.799887 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:04.799890 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:04.799894 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:04.799898 | orchestrator | 2026-02-28 00:59:04.799902 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-02-28 00:59:04.799906 | orchestrator | Saturday 28 February 2026 00:56:46 +0000 (0:00:01.009) 0:09:38.816 ***** 2026-02-28 00:59:04.799910 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:04.799915 | orchestrator | 2026-02-28 00:59:04.799918 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-02-28 00:59:04.799922 | orchestrator | Saturday 28 February 2026 00:56:48 +0000 (0:00:01.827) 0:09:40.643 ***** 2026-02-28 00:59:04.799926 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:59:04.799930 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:59:04.799934 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:59:04.799937 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:04.799941 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:04.799945 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:04.799949 | orchestrator | 2026-02-28 00:59:04.799953 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-02-28 00:59:04.799956 | orchestrator | Saturday 28 February 2026 00:56:50 +0000 (0:00:02.365) 0:09:43.009 ***** 2026-02-28 00:59:04.799960 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:59:04.799964 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:59:04.799968 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:59:04.799972 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:04.799975 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:04.799979 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:04.799983 | orchestrator | 2026-02-28 00:59:04.799987 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-02-28 00:59:04.799991 | orchestrator | Saturday 28 February 2026 00:56:54 +0000 (0:00:03.622) 
0:09:46.631 ***** 2026-02-28 00:59:04.799997 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:04.800001 | orchestrator | 2026-02-28 00:59:04.800005 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-02-28 00:59:04.800008 | orchestrator | Saturday 28 February 2026 00:56:55 +0000 (0:00:01.415) 0:09:48.047 ***** 2026-02-28 00:59:04.800012 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.800016 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.800020 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.800027 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.800031 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.800035 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.800039 | orchestrator | 2026-02-28 00:59:04.800043 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-02-28 00:59:04.800046 | orchestrator | Saturday 28 February 2026 00:56:56 +0000 (0:00:00.930) 0:09:48.977 ***** 2026-02-28 00:59:04.800050 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:59:04.800057 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:59:04.800061 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:59:04.800065 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:04.800068 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:04.800072 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:04.800076 | orchestrator | 2026-02-28 00:59:04.800080 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-02-28 00:59:04.800084 | orchestrator | Saturday 28 February 2026 00:56:58 +0000 (0:00:02.234) 0:09:51.212 ***** 2026-02-28 00:59:04.800088 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.800091 | orchestrator | 
ok: [testbed-node-4] 2026-02-28 00:59:04.800095 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.800099 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:04.800103 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:04.800106 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:04.800110 | orchestrator | 2026-02-28 00:59:04.800114 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-02-28 00:59:04.800118 | orchestrator | 2026-02-28 00:59:04.800122 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-28 00:59:04.800126 | orchestrator | Saturday 28 February 2026 00:56:59 +0000 (0:00:01.236) 0:09:52.448 ***** 2026-02-28 00:59:04.800130 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:59:04.800134 | orchestrator | 2026-02-28 00:59:04.800137 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-28 00:59:04.800141 | orchestrator | Saturday 28 February 2026 00:57:00 +0000 (0:00:00.623) 0:09:53.071 ***** 2026-02-28 00:59:04.800145 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:59:04.800149 | orchestrator | 2026-02-28 00:59:04.800153 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-28 00:59:04.800157 | orchestrator | Saturday 28 February 2026 00:57:01 +0000 (0:00:00.860) 0:09:53.931 ***** 2026-02-28 00:59:04.800161 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.800164 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.800168 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.800172 | orchestrator | 2026-02-28 00:59:04.800176 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2026-02-28 00:59:04.800180 | orchestrator | Saturday 28 February 2026 00:57:01 +0000 (0:00:00.314) 0:09:54.246 ***** 2026-02-28 00:59:04.800184 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.800187 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.800191 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.800195 | orchestrator | 2026-02-28 00:59:04.800199 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-28 00:59:04.800203 | orchestrator | Saturday 28 February 2026 00:57:02 +0000 (0:00:00.757) 0:09:55.003 ***** 2026-02-28 00:59:04.800207 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.800210 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.800214 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.800218 | orchestrator | 2026-02-28 00:59:04.800222 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-28 00:59:04.800226 | orchestrator | Saturday 28 February 2026 00:57:03 +0000 (0:00:01.100) 0:09:56.104 ***** 2026-02-28 00:59:04.800230 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.800233 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.800240 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.800244 | orchestrator | 2026-02-28 00:59:04.800248 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-28 00:59:04.800252 | orchestrator | Saturday 28 February 2026 00:57:04 +0000 (0:00:00.644) 0:09:56.749 ***** 2026-02-28 00:59:04.800256 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.800259 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.800263 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.800267 | orchestrator | 2026-02-28 00:59:04.800271 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-28 
00:59:04.800275 | orchestrator | Saturday 28 February 2026 00:57:04 +0000 (0:00:00.319) 0:09:57.069 ***** 2026-02-28 00:59:04.800278 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.800282 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.800286 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.800290 | orchestrator | 2026-02-28 00:59:04.800294 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-28 00:59:04.800297 | orchestrator | Saturday 28 February 2026 00:57:04 +0000 (0:00:00.285) 0:09:57.354 ***** 2026-02-28 00:59:04.800301 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.800305 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.800309 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.800313 | orchestrator | 2026-02-28 00:59:04.800316 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-28 00:59:04.800320 | orchestrator | Saturday 28 February 2026 00:57:05 +0000 (0:00:00.479) 0:09:57.834 ***** 2026-02-28 00:59:04.800324 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.800328 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.800332 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.800335 | orchestrator | 2026-02-28 00:59:04.800341 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-28 00:59:04.800345 | orchestrator | Saturday 28 February 2026 00:57:05 +0000 (0:00:00.634) 0:09:58.468 ***** 2026-02-28 00:59:04.800349 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.800353 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.800357 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.800361 | orchestrator | 2026-02-28 00:59:04.800364 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-28 00:59:04.800368 | orchestrator | 
Saturday 28 February 2026 00:57:06 +0000 (0:00:00.677) 0:09:59.146 ***** 2026-02-28 00:59:04.800372 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.800376 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.800380 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.800384 | orchestrator | 2026-02-28 00:59:04.800387 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-28 00:59:04.800391 | orchestrator | Saturday 28 February 2026 00:57:06 +0000 (0:00:00.338) 0:09:59.485 ***** 2026-02-28 00:59:04.800395 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.800401 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.800405 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.800409 | orchestrator | 2026-02-28 00:59:04.800413 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-28 00:59:04.800417 | orchestrator | Saturday 28 February 2026 00:57:07 +0000 (0:00:00.472) 0:09:59.957 ***** 2026-02-28 00:59:04.800421 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.800425 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.800428 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.800432 | orchestrator | 2026-02-28 00:59:04.800436 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-28 00:59:04.800440 | orchestrator | Saturday 28 February 2026 00:57:07 +0000 (0:00:00.321) 0:10:00.279 ***** 2026-02-28 00:59:04.800444 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.800447 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.800451 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.800458 | orchestrator | 2026-02-28 00:59:04.800462 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-28 00:59:04.800466 | orchestrator | Saturday 28 February 2026 00:57:08 
+0000 (0:00:00.354) 0:10:00.633 *****
2026-02-28 00:59:04.800470 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.800473 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.800477 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.800481 | orchestrator |
2026-02-28 00:59:04.800485 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-28 00:59:04.800489 | orchestrator | Saturday 28 February 2026 00:57:08 +0000 (0:00:00.328) 0:10:00.962 *****
2026-02-28 00:59:04.800492 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.800496 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.800500 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.800504 | orchestrator |
2026-02-28 00:59:04.800508 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-28 00:59:04.800511 | orchestrator | Saturday 28 February 2026 00:57:09 +0000 (0:00:00.636) 0:10:01.599 *****
2026-02-28 00:59:04.800516 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.800519 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.800523 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.800527 | orchestrator |
2026-02-28 00:59:04.800531 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-28 00:59:04.800535 | orchestrator | Saturday 28 February 2026 00:57:09 +0000 (0:00:00.375) 0:10:01.975 *****
2026-02-28 00:59:04.800539 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.800542 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.800546 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.800550 | orchestrator |
2026-02-28 00:59:04.800554 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-28 00:59:04.800558 | orchestrator | Saturday 28 February 2026 00:57:09 +0000 (0:00:00.333) 0:10:02.308 *****
2026-02-28 00:59:04.800562 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.800566 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.800569 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.800573 | orchestrator |
2026-02-28 00:59:04.800577 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-28 00:59:04.800581 | orchestrator | Saturday 28 February 2026 00:57:10 +0000 (0:00:00.383) 0:10:02.691 *****
2026-02-28 00:59:04.800585 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.800589 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.800596 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.800600 | orchestrator |
2026-02-28 00:59:04.800603 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-02-28 00:59:04.800607 | orchestrator | Saturday 28 February 2026 00:57:11 +0000 (0:00:00.869) 0:10:03.561 *****
2026-02-28 00:59:04.800621 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.800625 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.800629 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-02-28 00:59:04.800633 | orchestrator |
2026-02-28 00:59:04.800637 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-02-28 00:59:04.800641 | orchestrator | Saturday 28 February 2026 00:57:11 +0000 (0:00:00.443) 0:10:04.005 *****
2026-02-28 00:59:04.800645 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-28 00:59:04.800649 | orchestrator |
2026-02-28 00:59:04.800652 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-02-28 00:59:04.800656 | orchestrator | Saturday 28 February 2026 00:57:13 +0000 (0:00:02.217) 0:10:06.222 *****
2026-02-28 00:59:04.800662 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-02-28 00:59:04.800668 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.800676 | orchestrator |
2026-02-28 00:59:04.800680 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-02-28 00:59:04.800684 | orchestrator | Saturday 28 February 2026 00:57:13 +0000 (0:00:00.222) 0:10:06.445 *****
2026-02-28 00:59:04.800692 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-28 00:59:04.800697 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-28 00:59:04.800701 | orchestrator |
2026-02-28 00:59:04.800705 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-02-28 00:59:04.800709 | orchestrator | Saturday 28 February 2026 00:57:21 +0000 (0:00:07.261) 0:10:13.707 *****
2026-02-28 00:59:04.800715 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-28 00:59:04.800719 | orchestrator |
2026-02-28 00:59:04.800723 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-02-28 00:59:04.800727 | orchestrator | Saturday 28 February 2026 00:57:24 +0000 (0:00:03.309) 0:10:17.017 *****
2026-02-28 00:59:04.800730 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:59:04.800734 | orchestrator |
2026-02-28 00:59:04.800738 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-02-28 00:59:04.800742 | orchestrator | Saturday 28 February 2026 00:57:25 +0000 (0:00:00.559) 0:10:17.576 *****
2026-02-28 00:59:04.800746 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-28 00:59:04.800749 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-28 00:59:04.800753 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-28 00:59:04.800757 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-02-28 00:59:04.800760 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-02-28 00:59:04.800764 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-02-28 00:59:04.800768 | orchestrator |
2026-02-28 00:59:04.800772 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-02-28 00:59:04.800775 | orchestrator | Saturday 28 February 2026 00:57:26 +0000 (0:00:01.004) 0:10:18.580 *****
2026-02-28 00:59:04.800779 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-28 00:59:04.800783 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-28 00:59:04.800787 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-28 00:59:04.800791 | orchestrator |
2026-02-28 00:59:04.800795 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-02-28 00:59:04.800798 | orchestrator | Saturday 28 February 2026 00:57:28 +0000 (0:00:02.500) 0:10:21.081 *****
2026-02-28 00:59:04.800802 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-28 00:59:04.800806 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-28 00:59:04.800810 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:59:04.800814 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-28 00:59:04.800817 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-28 00:59:04.800821 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:59:04.800825 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-28 00:59:04.800839 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-28 00:59:04.800843 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:59:04.800847 | orchestrator |
2026-02-28 00:59:04.800851 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-02-28 00:59:04.800858 | orchestrator | Saturday 28 February 2026 00:57:30 +0000 (0:00:01.534) 0:10:22.615 *****
2026-02-28 00:59:04.800862 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:59:04.800866 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:59:04.800869 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:59:04.800873 | orchestrator |
2026-02-28 00:59:04.800877 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-02-28 00:59:04.800881 | orchestrator | Saturday 28 February 2026 00:57:32 +0000 (0:00:02.794) 0:10:25.410 *****
2026-02-28 00:59:04.800884 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.800890 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.800896 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.800902 | orchestrator |
2026-02-28 00:59:04.800909 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-02-28 00:59:04.800915 | orchestrator | Saturday 28 February 2026 00:57:33 +0000 (0:00:00.325) 0:10:25.735 *****
2026-02-28 00:59:04.800923 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:59:04.800931 | orchestrator |
2026-02-28 00:59:04.800937 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-02-28 00:59:04.800943 | orchestrator | Saturday 28 February 2026 00:57:34 +0000 (0:00:00.827) 0:10:26.563 *****
2026-02-28 00:59:04.800949 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:59:04.800955 | orchestrator |
2026-02-28 00:59:04.800962 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-02-28 00:59:04.800968 | orchestrator | Saturday 28 February 2026 00:57:34 +0000 (0:00:00.602) 0:10:27.166 *****
2026-02-28 00:59:04.800974 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:59:04.800979 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:59:04.800985 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:59:04.800991 | orchestrator |
2026-02-28 00:59:04.801000 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-02-28 00:59:04.801007 | orchestrator | Saturday 28 February 2026 00:57:35 +0000 (0:00:01.222) 0:10:28.388 *****
2026-02-28 00:59:04.801012 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:59:04.801017 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:59:04.801023 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:59:04.801029 | orchestrator |
2026-02-28 00:59:04.801034 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-02-28 00:59:04.801040 | orchestrator | Saturday 28 February 2026 00:57:37 +0000 (0:00:01.492) 0:10:29.880 *****
2026-02-28 00:59:04.801046 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:59:04.801052 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:59:04.801058 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:59:04.801064 | orchestrator |
2026-02-28 00:59:04.801069 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-02-28 00:59:04.801076 | orchestrator | Saturday 28 February 2026 00:57:40 +0000 (0:00:02.797) 0:10:32.678 *****
2026-02-28 00:59:04.801081 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:59:04.801091 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:59:04.801098 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:59:04.801104 | orchestrator |
2026-02-28 00:59:04.801110 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-02-28 00:59:04.801117 | orchestrator | Saturday 28 February 2026 00:57:42 +0000 (0:00:02.072) 0:10:34.751 *****
2026-02-28 00:59:04.801123 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.801129 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.801135 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.801141 | orchestrator |
2026-02-28 00:59:04.801147 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-28 00:59:04.801154 | orchestrator | Saturday 28 February 2026 00:57:43 +0000 (0:00:01.607) 0:10:36.358 *****
2026-02-28 00:59:04.801160 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:59:04.801173 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:59:04.801179 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:59:04.801185 | orchestrator |
2026-02-28 00:59:04.801190 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-28 00:59:04.801196 | orchestrator | Saturday 28 February 2026 00:57:44 +0000 (0:00:00.771) 0:10:37.129 *****
2026-02-28 00:59:04.801202 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:59:04.801208 | orchestrator |
2026-02-28 00:59:04.801214 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-02-28 00:59:04.801220 | orchestrator | Saturday 28 February 2026 00:57:45 +0000 (0:00:00.842) 0:10:37.972 *****
2026-02-28 00:59:04.801226 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.801232 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.801237 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.801242 | orchestrator |
2026-02-28 00:59:04.801249 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-02-28 00:59:04.801255 | orchestrator | Saturday 28 February 2026 00:57:45 +0000 (0:00:00.370) 0:10:38.342 *****
2026-02-28 00:59:04.801261 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:59:04.801267 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:59:04.801273 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:59:04.801279 | orchestrator |
2026-02-28 00:59:04.801285 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-02-28 00:59:04.801291 | orchestrator | Saturday 28 February 2026 00:57:47 +0000 (0:00:01.274) 0:10:39.616 *****
2026-02-28 00:59:04.801297 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-28 00:59:04.801303 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-28 00:59:04.801309 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-28 00:59:04.801315 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.801321 | orchestrator |
2026-02-28 00:59:04.801327 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-02-28 00:59:04.801333 | orchestrator | Saturday 28 February 2026 00:57:48 +0000 (0:00:00.978) 0:10:40.595 *****
2026-02-28 00:59:04.801340 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.801347 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.801353 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.801359 | orchestrator |
2026-02-28 00:59:04.801365 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-02-28 00:59:04.801372 | orchestrator |
2026-02-28 00:59:04.801378 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-28 00:59:04.801385 | orchestrator | Saturday 28 February 2026 00:57:49 +0000 (0:00:00.928) 0:10:41.523 *****
2026-02-28 00:59:04.801391 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:59:04.801398 | orchestrator |
2026-02-28 00:59:04.801404 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-28 00:59:04.801409 | orchestrator | Saturday 28 February 2026 00:57:49 +0000 (0:00:00.637) 0:10:42.160 *****
2026-02-28 00:59:04.801413 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:59:04.801417 | orchestrator |
2026-02-28 00:59:04.801421 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-28 00:59:04.801425 | orchestrator | Saturday 28 February 2026 00:57:50 +0000 (0:00:00.344) 0:10:42.982 *****
2026-02-28 00:59:04.801428 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.801432 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.801436 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.801440 | orchestrator |
2026-02-28 00:59:04.801444 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-28 00:59:04.801448 | orchestrator | Saturday 28 February 2026 00:57:50 +0000 (0:00:00.344) 0:10:43.326 *****
2026-02-28 00:59:04.801456 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.801460 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.801464 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.801467 | orchestrator |
2026-02-28 00:59:04.801471 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-28 00:59:04.801479 | orchestrator | Saturday 28 February 2026 00:57:51 +0000 (0:00:00.736) 0:10:44.063 *****
2026-02-28 00:59:04.801483 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.801487 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.801490 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.801494 | orchestrator |
2026-02-28 00:59:04.801498 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-28 00:59:04.801502 | orchestrator | Saturday 28 February 2026 00:57:52 +0000 (0:00:01.006) 0:10:45.070 *****
2026-02-28 00:59:04.801505 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.801509 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.801513 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.801517 | orchestrator |
2026-02-28 00:59:04.801521 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-28 00:59:04.801524 | orchestrator | Saturday 28 February 2026 00:57:53 +0000 (0:00:00.748) 0:10:45.818 *****
2026-02-28 00:59:04.801528 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.801532 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.801536 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.801540 | orchestrator |
2026-02-28 00:59:04.801548 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-28 00:59:04.801552 | orchestrator | Saturday 28 February 2026 00:57:53 +0000 (0:00:00.370) 0:10:46.189 *****
2026-02-28 00:59:04.801556 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.801560 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.801563 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.801567 | orchestrator |
2026-02-28 00:59:04.801571 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-28 00:59:04.801575 | orchestrator | Saturday 28 February 2026 00:57:54 +0000 (0:00:00.365) 0:10:46.555 *****
2026-02-28 00:59:04.801579 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.801582 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.801586 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.801590 | orchestrator |
2026-02-28 00:59:04.801594 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-28 00:59:04.801598 | orchestrator | Saturday 28 February 2026 00:57:54 +0000 (0:00:00.661) 0:10:47.216 *****
2026-02-28 00:59:04.801601 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.801605 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.801609 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.801623 | orchestrator |
2026-02-28 00:59:04.801627 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-28 00:59:04.801631 | orchestrator | Saturday 28 February 2026 00:57:56 +0000 (0:00:01.695) 0:10:48.912 *****
2026-02-28 00:59:04.801635 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.801639 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.801642 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.801646 | orchestrator |
2026-02-28 00:59:04.801650 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-28 00:59:04.801654 | orchestrator | Saturday 28 February 2026 00:57:57 +0000 (0:00:00.856) 0:10:49.768 *****
2026-02-28 00:59:04.801658 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.801662 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.801665 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.801669 | orchestrator |
2026-02-28 00:59:04.801673 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-28 00:59:04.801677 | orchestrator | Saturday 28 February 2026 00:57:57 +0000 (0:00:00.339) 0:10:50.108 *****
2026-02-28 00:59:04.801681 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.801685 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.801692 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.801695 | orchestrator |
2026-02-28 00:59:04.801699 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-28 00:59:04.801703 | orchestrator | Saturday 28 February 2026 00:57:58 +0000 (0:00:00.610) 0:10:50.718 *****
2026-02-28 00:59:04.801707 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.801711 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.801715 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.801721 | orchestrator |
2026-02-28 00:59:04.801727 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-28 00:59:04.801733 | orchestrator | Saturday 28 February 2026 00:57:58 +0000 (0:00:00.345) 0:10:51.063 *****
2026-02-28 00:59:04.801739 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.801745 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.801751 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.801757 | orchestrator |
2026-02-28 00:59:04.801764 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-28 00:59:04.801770 | orchestrator | Saturday 28 February 2026 00:57:58 +0000 (0:00:00.359) 0:10:51.423 *****
2026-02-28 00:59:04.801777 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.801783 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.801789 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.801796 | orchestrator |
2026-02-28 00:59:04.801802 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-28 00:59:04.801809 | orchestrator | Saturday 28 February 2026 00:57:59 +0000 (0:00:00.391) 0:10:51.815 *****
2026-02-28 00:59:04.801816 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.801820 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.801824 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.801828 | orchestrator |
2026-02-28 00:59:04.801832 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-28 00:59:04.801836 | orchestrator | Saturday 28 February 2026 00:57:59 +0000 (0:00:00.601) 0:10:52.416 *****
2026-02-28 00:59:04.801840 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.801844 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.801847 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.801851 | orchestrator |
2026-02-28 00:59:04.801855 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-28 00:59:04.801859 | orchestrator | Saturday 28 February 2026 00:58:00 +0000 (0:00:00.359) 0:10:52.775 *****
2026-02-28 00:59:04.801863 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.801866 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.801870 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.801874 | orchestrator |
2026-02-28 00:59:04.801878 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-28 00:59:04.801882 | orchestrator | Saturday 28 February 2026 00:58:00 +0000 (0:00:00.391) 0:10:53.166 *****
2026-02-28 00:59:04.801889 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.801893 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.801896 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.801900 | orchestrator |
2026-02-28 00:59:04.801904 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-28 00:59:04.801908 | orchestrator | Saturday 28 February 2026 00:58:01 +0000 (0:00:00.371) 0:10:53.538 *****
2026-02-28 00:59:04.801912 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:59:04.801915 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:59:04.801919 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:59:04.801923 | orchestrator |
2026-02-28 00:59:04.801927 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-02-28 00:59:04.801930 | orchestrator | Saturday 28 February 2026 00:58:01 +0000 (0:00:00.879) 0:10:54.418 *****
2026-02-28 00:59:04.801934 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-5, testbed-node-4
2026-02-28 00:59:04.801938 | orchestrator |
2026-02-28 00:59:04.801942 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-28 00:59:04.801954 | orchestrator | Saturday 28 February 2026 00:58:02 +0000 (0:00:00.789) 0:10:55.207 *****
2026-02-28 00:59:04.801958 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-28 00:59:04.801962 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-28 00:59:04.801966 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-28 00:59:04.801969 | orchestrator |
2026-02-28 00:59:04.801973 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-28 00:59:04.801977 | orchestrator | Saturday 28 February 2026 00:58:05 +0000 (0:00:02.355) 0:10:57.563 *****
2026-02-28 00:59:04.801981 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-28 00:59:04.801985 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-28 00:59:04.801989 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:59:04.801992 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-28 00:59:04.801996 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-28 00:59:04.802000 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:59:04.802004 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-28 00:59:04.802007 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-28 00:59:04.802011 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:59:04.802037 | orchestrator |
2026-02-28 00:59:04.802040 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-02-28 00:59:04.802044 | orchestrator | Saturday 28 February 2026 00:58:06 +0000 (0:00:01.922) 0:10:59.485 *****
2026-02-28 00:59:04.802048 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.802052 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.802056 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.802059 | orchestrator |
2026-02-28 00:59:04.802063 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-02-28 00:59:04.802067 | orchestrator | Saturday 28 February 2026 00:58:07 +0000 (0:00:00.391) 0:10:59.877 *****
2026-02-28 00:59:04.802071 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4, testbed-node-3, testbed-node-5
2026-02-28 00:59:04.802075 | orchestrator |
2026-02-28 00:59:04.802079 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-02-28 00:59:04.802083 | orchestrator | Saturday 28 February 2026 00:58:08 +0000 (0:00:00.723) 0:11:00.600 *****
2026-02-28 00:59:04.802087 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-28 00:59:04.802092 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-28 00:59:04.802095 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-28 00:59:04.802099 | orchestrator |
2026-02-28 00:59:04.802103 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-02-28 00:59:04.802107 | orchestrator | Saturday 28 February 2026 00:58:10 +0000 (0:00:01.974) 0:11:02.575 *****
2026-02-28 00:59:04.802111 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-28 00:59:04.802114 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-28 00:59:04.802118 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-28 00:59:04.802122 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-28 00:59:04.802126 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-28 00:59:04.802130 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-28 00:59:04.802138 | orchestrator |
2026-02-28 00:59:04.802142 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-28 00:59:04.802145 | orchestrator | Saturday 28 February 2026 00:58:14 +0000 (0:00:04.736) 0:11:07.312 *****
2026-02-28 00:59:04.802149 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-28 00:59:04.802153 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-28 00:59:04.802157 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-28 00:59:04.802163 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-28 00:59:04.802167 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-28 00:59:04.802171 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-28 00:59:04.802175 | orchestrator |
2026-02-28 00:59:04.802178 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-28 00:59:04.802182 | orchestrator | Saturday 28 February 2026 00:58:17 +0000 (0:00:02.381) 0:11:09.693 *****
2026-02-28 00:59:04.802186 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-28 00:59:04.802190 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:59:04.802194 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-28 00:59:04.802197 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:59:04.802201 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-28 00:59:04.802205 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:59:04.802209 | orchestrator |
2026-02-28 00:59:04.802213 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-02-28 00:59:04.802219 | orchestrator | Saturday 28 February 2026 00:58:18 +0000 (0:00:01.178) 0:11:10.872 *****
2026-02-28 00:59:04.802223 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-02-28 00:59:04.802227 | orchestrator |
2026-02-28 00:59:04.802231 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-02-28 00:59:04.802235 | orchestrator | Saturday 28 February 2026 00:58:18 +0000 (0:00:00.260) 0:11:11.132 *****
2026-02-28 00:59:04.802239 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:59:04.802243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:59:04.802247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:59:04.802251 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:59:04.802255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:59:04.802259 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.802263 | orchestrator |
2026-02-28 00:59:04.802267 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-02-28 00:59:04.802270 | orchestrator | Saturday 28 February 2026 00:58:19 +0000 (0:00:00.967) 0:11:12.100 *****
2026-02-28 00:59:04.802274 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:59:04.802278 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:59:04.802282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:59:04.802286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:59:04.802293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:59:04.802297 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.802301 | orchestrator |
2026-02-28 00:59:04.802305 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-02-28 00:59:04.802309 | orchestrator | Saturday 28 February 2026 00:58:20 +0000 (0:00:00.558) 0:11:12.658 *****
2026-02-28 00:59:04.802313 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:59:04.802317 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:59:04.802320 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:59:04.802324 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:59:04.802328 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:59:04.802332 | orchestrator |
2026-02-28 00:59:04.802336 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-02-28 00:59:04.802339 | orchestrator | Saturday 28 February 2026 00:58:51 +0000 (0:00:31.038) 0:11:43.697 *****
2026-02-28 00:59:04.802343 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.802347 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.802351 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.802355 | orchestrator |
2026-02-28 00:59:04.802359 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-02-28 00:59:04.802362 | orchestrator | Saturday 28 February 2026 00:58:51 +0000 (0:00:00.329) 0:11:44.026 *****
2026-02-28 00:59:04.802369 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.802373 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.802377 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.802380 | orchestrator |
2026-02-28 00:59:04.802384 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-02-28 00:59:04.802388 | orchestrator | Saturday 28 February 2026 00:58:51 +0000 (0:00:00.334) 0:11:44.360 *****
2026-02-28 00:59:04.802392 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:59:04.802396 | orchestrator |
2026-02-28 00:59:04.802400 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-02-28 00:59:04.802403 | orchestrator | Saturday 28 February 2026 00:58:52 +0000 (0:00:00.821) 0:11:45.182 *****
2026-02-28 00:59:04.802407 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:59:04.802411 | orchestrator |
2026-02-28 00:59:04.802417 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-02-28 00:59:04.802421 | orchestrator | Saturday 28 February 2026 00:58:53 +0000 (0:00:00.583) 0:11:45.766 *****
2026-02-28 00:59:04.802425 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:59:04.802429 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:59:04.802432 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:59:04.802436 | orchestrator |
2026-02-28 00:59:04.802440 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-02-28 00:59:04.802444 | orchestrator | Saturday 28 February 2026 00:58:54 +0000 (0:00:01.327) 0:11:47.093 *****
2026-02-28 00:59:04.802448 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:59:04.802451 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:59:04.802455 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:59:04.802459 | orchestrator |
2026-02-28 00:59:04.802466 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-02-28 00:59:04.802470 | orchestrator | Saturday 28 February 2026 00:58:56 +0000 (0:00:01.586) 0:11:48.679 *****
2026-02-28 00:59:04.802474 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:59:04.802478 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:59:04.802482 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:59:04.802485 | orchestrator |
2026-02-28 00:59:04.802489 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-02-28 00:59:04.802493 | orchestrator | Saturday 28 February 2026 00:58:58 +0000 (0:00:01.841) 0:11:50.521 *****
2026-02-28 00:59:04.802497 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-28 00:59:04.802501 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-28 00:59:04.802505 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-28 00:59:04.802508 | orchestrator |
2026-02-28 00:59:04.802512 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-28 00:59:04.802516 | orchestrator | Saturday 28 February 2026 00:59:00 +0000 (0:00:02.723) 0:11:53.245 *****
2026-02-28 00:59:04.802520 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:59:04.802524 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:59:04.802527 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:59:04.802531 | orchestrator
| 2026-02-28 00:59:04.802535 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-28 00:59:04.802539 | orchestrator | Saturday 28 February 2026 00:59:01 +0000 (0:00:00.422) 0:11:53.668 ***** 2026-02-28 00:59:04.802543 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:59:04.802547 | orchestrator | 2026-02-28 00:59:04.802550 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-02-28 00:59:04.802554 | orchestrator | Saturday 28 February 2026 00:59:01 +0000 (0:00:00.513) 0:11:54.181 ***** 2026-02-28 00:59:04.802558 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.802562 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.802566 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.802570 | orchestrator | 2026-02-28 00:59:04.802573 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-02-28 00:59:04.802577 | orchestrator | Saturday 28 February 2026 00:59:02 +0000 (0:00:00.654) 0:11:54.836 ***** 2026-02-28 00:59:04.802581 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:59:04.802585 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:59:04.802588 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:59:04.802592 | orchestrator | 2026-02-28 00:59:04.802596 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-02-28 00:59:04.802600 | orchestrator | Saturday 28 February 2026 00:59:02 +0000 (0:00:00.380) 0:11:55.216 ***** 2026-02-28 00:59:04.802604 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 00:59:04.802608 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 00:59:04.802644 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 00:59:04.802649 | orchestrator 
| skipping: [testbed-node-3] 2026-02-28 00:59:04.802652 | orchestrator | 2026-02-28 00:59:04.802656 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-02-28 00:59:04.802660 | orchestrator | Saturday 28 February 2026 00:59:03 +0000 (0:00:00.722) 0:11:55.939 ***** 2026-02-28 00:59:04.802664 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:59:04.802668 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:59:04.802672 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:59:04.802676 | orchestrator | 2026-02-28 00:59:04.802679 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:59:04.802689 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-02-28 00:59:04.802696 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-02-28 00:59:04.802700 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-02-28 00:59:04.802704 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-02-28 00:59:04.802707 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-02-28 00:59:04.802714 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-02-28 00:59:04.802718 | orchestrator | 2026-02-28 00:59:04.802722 | orchestrator | 2026-02-28 00:59:04.802726 | orchestrator | 2026-02-28 00:59:04.802729 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:59:04.802733 | orchestrator | Saturday 28 February 2026 00:59:03 +0000 (0:00:00.296) 0:11:56.236 ***** 2026-02-28 00:59:04.802737 | orchestrator | =============================================================================== 
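The PLAY RECAP block above reports per-host task counters in a fixed `key=value` layout (`ok`, `changed`, `unreachable`, `failed`, `skipped`, `rescued`, `ignored`); a host passed the play when both `failed` and `unreachable` are zero. A minimal sketch of turning one such recap line into structured data — the `parse_recap_line` helper is hypothetical, not part of OSISM or Zuul:

```python
import re

def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    """Split an Ansible PLAY RECAP line into (host, counter dict)."""
    host, _, counters = line.partition(":")
    stats = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", counters)}
    return host.strip(), stats

host, stats = parse_recap_line(
    "testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 "
    "skipped=125  rescued=0 ignored=0"
)
# failed=0 and unreachable=0 means the play succeeded on this host.
assert stats["failed"] == 0 and stats["unreachable"] == 0
```

The same pattern applies to every host line in the recap, so a CI post-processing step could fail fast on any host with nonzero `failed` counters.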
2026-02-28 00:59:04.802741 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 44.44s 2026-02-28 00:59:04.802745 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 43.87s 2026-02-28 00:59:04.802749 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.04s 2026-02-28 00:59:04.802752 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.61s 2026-02-28 00:59:04.802756 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.94s 2026-02-28 00:59:04.802760 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.32s 2026-02-28 00:59:04.802764 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.66s 2026-02-28 00:59:04.802768 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.49s 2026-02-28 00:59:04.802772 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.42s 2026-02-28 00:59:04.802776 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.26s 2026-02-28 00:59:04.802780 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.92s 2026-02-28 00:59:04.802783 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.78s 2026-02-28 00:59:04.802787 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 5.59s 2026-02-28 00:59:04.802791 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.45s 2026-02-28 00:59:04.802795 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.74s 2026-02-28 00:59:04.802799 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 4.58s 2026-02-28 
00:59:04.802802 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.25s 2026-02-28 00:59:04.802806 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.17s 2026-02-28 00:59:04.802810 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 4.10s 2026-02-28 00:59:04.802814 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.62s 2026-02-28 00:59:04.802818 | orchestrator | 2026-02-28 00:59:04 | INFO  | Task 6cfd6d28-095d-4fda-af77-9bb0780c82cb is in state STARTED 2026-02-28 00:59:04.802821 | orchestrator | 2026-02-28 00:59:04 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:07.839085 | orchestrator | 2026-02-28 00:59:07 | INFO  | Task e56ad6c2-cc17-4c48-8b43-dbaf6972bef1 is in state STARTED 2026-02-28 00:59:07.839226 | orchestrator | 2026-02-28 00:59:07 | INFO  | Task 6cfd6d28-095d-4fda-af77-9bb0780c82cb is in state STARTED 2026-02-28 00:59:07.840570 | orchestrator | 2026-02-28 00:59:07 | INFO  | Task 298724df-a6a6-4aa6-bc71-a267cc5fa183 is in state STARTED 2026-02-28 00:59:07.840708 | orchestrator | 2026-02-28 00:59:07 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:10.928514 | orchestrator | 2026-02-28 00:59:10 | INFO  | Task e56ad6c2-cc17-4c48-8b43-dbaf6972bef1 is in state STARTED 2026-02-28 00:59:10.929650 | orchestrator | 2026-02-28 00:59:10 | INFO  | Task 6cfd6d28-095d-4fda-af77-9bb0780c82cb is in state STARTED 2026-02-28 00:59:10.931878 | orchestrator | 2026-02-28 00:59:10 | INFO  | Task 298724df-a6a6-4aa6-bc71-a267cc5fa183 is in state STARTED 2026-02-28 00:59:10.931923 | orchestrator | 2026-02-28 00:59:10 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:13.994743 | orchestrator | 2026-02-28 00:59:13 | INFO  | Task e56ad6c2-cc17-4c48-8b43-dbaf6972bef1 is in state STARTED 2026-02-28 00:59:13.999508 | orchestrator | 2026-02-28 00:59:13 | INFO  | Task 
6cfd6d28-095d-4fda-af77-9bb0780c82cb is in state STARTED 2026-02-28 00:59:14.007961 | orchestrator | 2026-02-28 00:59:14 | INFO  | Task 298724df-a6a6-4aa6-bc71-a267cc5fa183 is in state STARTED 2026-02-28 00:59:14.008703 | orchestrator | 2026-02-28 00:59:14 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:00:45.554593 | orchestrator | 2026-02-28 01:00:45 | INFO  | Task e56ad6c2-cc17-4c48-8b43-dbaf6972bef1 is in state SUCCESS 2026-02-28 01:00:45.556336 | orchestrator | 2026-02-28 01:00:45.556404 | orchestrator | 2026-02-28 01:00:45.556416 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:00:45.556425 | orchestrator | 2026-02-28 01:00:45.556433 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:00:45.556442 | orchestrator | 
Saturday 28 February 2026 00:57:35 +0000 (0:00:00.279) 0:00:00.279 ***** 2026-02-28 01:00:45.556449 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:00:45.556458 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:00:45.556466 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:00:45.556473 | orchestrator | 2026-02-28 01:00:45.556481 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:00:45.556489 | orchestrator | Saturday 28 February 2026 00:57:36 +0000 (0:00:00.337) 0:00:00.617 ***** 2026-02-28 01:00:45.556497 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-28 01:00:45.556505 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-28 01:00:45.556512 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-28 01:00:45.556520 | orchestrator | 2026-02-28 01:00:45.556527 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-28 01:00:45.556534 | orchestrator | 2026-02-28 01:00:45.556542 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-28 01:00:45.556549 | orchestrator | Saturday 28 February 2026 00:57:36 +0000 (0:00:00.467) 0:00:01.084 ***** 2026-02-28 01:00:45.556557 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:00:45.556564 | orchestrator | 2026-02-28 01:00:45.556572 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-28 01:00:45.556579 | orchestrator | Saturday 28 February 2026 00:57:37 +0000 (0:00:00.543) 0:00:01.628 ***** 2026-02-28 01:00:45.556587 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-28 01:00:45.556594 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-28 
01:00:45.556648 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-28 01:00:45.556663 | orchestrator | 2026-02-28 01:00:45.556675 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-28 01:00:45.556682 | orchestrator | Saturday 28 February 2026 00:57:38 +0000 (0:00:00.769) 0:00:02.398 ***** 2026-02-28 01:00:45.556693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:00:45.556723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 
'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:00:45.556747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:00:45.556759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-28 01:00:45.556775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-28 01:00:45.556788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-28 01:00:45.556798 | orchestrator | 2026-02-28 01:00:45.556810 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-28 01:00:45.556822 | orchestrator | Saturday 28 February 2026 00:57:40 +0000 (0:00:01.942) 0:00:04.340 ***** 2026-02-28 01:00:45.556834 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:00:45.556845 | orchestrator | 2026-02-28 01:00:45.556991 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-28 01:00:45.557011 | orchestrator | Saturday 28 February 2026 00:57:40 +0000 (0:00:00.642) 0:00:04.982 ***** 2026-02-28 01:00:45.557021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:00:45.557040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:00:45.557103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:00:45.557134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-28 01:00:45.557153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-28 01:00:45.557170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-28 01:00:45.557180 | orchestrator | 2026-02-28 01:00:45.557189 | orchestrator | TASK [service-cert-copy : 
opensearch | Copying over backend internal TLS certificate] *** 2026-02-28 01:00:45.557198 | orchestrator | Saturday 28 February 2026 00:57:43 +0000 (0:00:02.570) 0:00:07.553 ***** 2026-02-28 01:00:45.557215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:00:45.557228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  
2026-02-28 01:00:45.557244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-28 01:00:45.557262 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:00:45.557272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-28 01:00:45.557281 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:00:45.557294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:00:45.557310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-28 01:00:45.557320 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:00:45.557377 | orchestrator | 2026-02-28 01:00:45.557386 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-28 01:00:45.557395 | orchestrator | Saturday 28 February 2026 00:57:44 +0000 (0:00:01.719) 0:00:09.272 ***** 2026-02-28 01:00:45.557404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:00:45.557413 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-28 01:00:45.557423 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:00:45.557436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:00:45.557452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:00:45.557467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-28 01:00:45.557477 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:00:45.557486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-28 01:00:45.557495 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:00:45.557504 | orchestrator | 2026-02-28 01:00:45.557512 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-28 01:00:45.557521 | orchestrator | Saturday 28 February 2026 00:57:46 +0000 (0:00:01.312) 0:00:10.585 ***** 2026-02-28 01:00:45.557534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:00:45.557550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:00:45.557565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:00:45.557575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-28 01:00:45.557589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-28 01:00:45.557605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-28 01:00:45.557645 | orchestrator | 2026-02-28 01:00:45.557656 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-28 01:00:45.557665 | orchestrator | Saturday 28 February 2026 00:57:48 +0000 (0:00:02.559) 0:00:13.144 ***** 2026-02-28 01:00:45.557674 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:00:45.557682 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:00:45.557691 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:00:45.557699 | orchestrator | 2026-02-28 01:00:45.557708 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-28 01:00:45.557717 | orchestrator | Saturday 28 February 2026 00:57:52 +0000 (0:00:03.680) 0:00:16.825 ***** 2026-02-28 01:00:45.557725 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:00:45.557735 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:00:45.557744 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:00:45.557752 | orchestrator | 2026-02-28 01:00:45.557761 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-02-28 01:00:45.557770 | orchestrator | Saturday 28 February 2026 00:57:54 +0000 (0:00:02.122) 0:00:18.947 ***** 2026-02-28 01:00:45.557779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:00:45.557793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:00:45.557803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:00:45.557823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-28 01:00:45.557834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-28 01:00:45.557848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-28 01:00:45.557862 | orchestrator | 2026-02-28 01:00:45.557871 | orchestrator | TASK [service-check-containers 
: opensearch | Notify handlers to restart containers] *** 2026-02-28 01:00:45.557880 | orchestrator | Saturday 28 February 2026 00:57:57 +0000 (0:00:02.547) 0:00:21.495 ***** 2026-02-28 01:00:45.557889 | orchestrator | changed: [testbed-node-0] => { 2026-02-28 01:00:45.557898 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:00:45.557906 | orchestrator | } 2026-02-28 01:00:45.557915 | orchestrator | changed: [testbed-node-1] => { 2026-02-28 01:00:45.557924 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:00:45.557932 | orchestrator | } 2026-02-28 01:00:45.557941 | orchestrator | changed: [testbed-node-2] => { 2026-02-28 01:00:45.557949 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:00:45.557958 | orchestrator | } 2026-02-28 01:00:45.557967 | orchestrator | 2026-02-28 01:00:45.557975 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-28 01:00:45.557989 | orchestrator | Saturday 28 February 2026 00:57:57 +0000 (0:00:00.413) 0:00:21.909 ***** 2026-02-28 01:00:45.557998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:00:45.558008 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-28 01:00:45.558071 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:00:45.558085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:00:45.558106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-28 01:00:45.558115 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:00:45.558125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:00:45.558134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-28 01:00:45.558144 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:00:45.558153 | orchestrator | 2026-02-28 01:00:45.558162 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-28 01:00:45.558171 | orchestrator | Saturday 28 February 2026 00:57:58 +0000 (0:00:01.329) 0:00:23.239 ***** 2026-02-28 01:00:45.558179 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:00:45.558194 | 
orchestrator | skipping: [testbed-node-1] 2026-02-28 01:00:45.558202 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:00:45.558211 | orchestrator | 2026-02-28 01:00:45.558219 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-28 01:00:45.558228 | orchestrator | Saturday 28 February 2026 00:57:59 +0000 (0:00:00.374) 0:00:23.613 ***** 2026-02-28 01:00:45.558237 | orchestrator | 2026-02-28 01:00:45.558246 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-28 01:00:45.558255 | orchestrator | Saturday 28 February 2026 00:57:59 +0000 (0:00:00.101) 0:00:23.715 ***** 2026-02-28 01:00:45.558264 | orchestrator | 2026-02-28 01:00:45.558273 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-28 01:00:45.558286 | orchestrator | Saturday 28 February 2026 00:57:59 +0000 (0:00:00.077) 0:00:23.793 ***** 2026-02-28 01:00:45.558294 | orchestrator | 2026-02-28 01:00:45.558304 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-28 01:00:45.558312 | orchestrator | Saturday 28 February 2026 00:57:59 +0000 (0:00:00.094) 0:00:23.887 ***** 2026-02-28 01:00:45.558321 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:00:45.558329 | orchestrator | 2026-02-28 01:00:45.558338 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-28 01:00:45.558346 | orchestrator | Saturday 28 February 2026 00:57:59 +0000 (0:00:00.232) 0:00:24.120 ***** 2026-02-28 01:00:45.558355 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:00:45.558364 | orchestrator | 2026-02-28 01:00:45.558372 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-28 01:00:45.558380 | orchestrator | Saturday 28 February 2026 00:58:00 +0000 (0:00:00.434) 0:00:24.555 ***** 2026-02-28 01:00:45.558389 | 
orchestrator | changed: [testbed-node-0] 2026-02-28 01:00:45.558398 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:00:45.558406 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:00:45.558414 | orchestrator | 2026-02-28 01:00:45.558423 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-28 01:00:45.558431 | orchestrator | Saturday 28 February 2026 00:59:07 +0000 (0:01:06.910) 0:01:31.465 ***** 2026-02-28 01:00:45.558439 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:00:45.558448 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:00:45.558456 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:00:45.558465 | orchestrator | 2026-02-28 01:00:45.558474 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-28 01:00:45.558482 | orchestrator | Saturday 28 February 2026 01:00:30 +0000 (0:01:23.099) 0:02:54.565 ***** 2026-02-28 01:00:45.558496 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:00:45.558505 | orchestrator | 2026-02-28 01:00:45.558514 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-28 01:00:45.558522 | orchestrator | Saturday 28 February 2026 01:00:30 +0000 (0:00:00.678) 0:02:55.243 ***** 2026-02-28 01:00:45.558531 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:00:45.558541 | orchestrator | 2026-02-28 01:00:45.558549 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-02-28 01:00:45.558557 | orchestrator | Saturday 28 February 2026 01:00:33 +0000 (0:00:02.718) 0:02:57.961 ***** 2026-02-28 01:00:45.558565 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:00:45.558572 | orchestrator | 2026-02-28 01:00:45.558579 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-28 
01:00:45.558587 | orchestrator | Saturday 28 February 2026 01:00:36 +0000 (0:00:02.403) 0:03:00.365 ***** 2026-02-28 01:00:45.558594 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:00:45.558602 | orchestrator | 2026-02-28 01:00:45.558609 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-28 01:00:45.558616 | orchestrator | Saturday 28 February 2026 01:00:38 +0000 (0:00:02.888) 0:03:03.253 ***** 2026-02-28 01:00:45.558681 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:00:45.558691 | orchestrator | 2026-02-28 01:00:45.558706 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-28 01:00:45.558713 | orchestrator | Saturday 28 February 2026 01:00:41 +0000 (0:00:02.867) 0:03:06.120 ***** 2026-02-28 01:00:45.558720 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:00:45.558728 | orchestrator | 2026-02-28 01:00:45.558737 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:00:45.558750 | orchestrator | testbed-node-0 : ok=20  changed=12  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 01:00:45.558768 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-28 01:00:45.558788 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-28 01:00:45.558799 | orchestrator | 2026-02-28 01:00:45.558810 | orchestrator | 2026-02-28 01:00:45.558821 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:00:45.558834 | orchestrator | Saturday 28 February 2026 01:00:44 +0000 (0:00:02.518) 0:03:08.639 ***** 2026-02-28 01:00:45.558847 | orchestrator | =============================================================================== 2026-02-28 01:00:45.558859 | orchestrator | opensearch : Restart opensearch-dashboards 
container ------------------- 83.10s 2026-02-28 01:00:45.558870 | orchestrator | opensearch : Restart opensearch container ------------------------------ 66.91s 2026-02-28 01:00:45.558883 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.68s 2026-02-28 01:00:45.558896 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.89s 2026-02-28 01:00:45.558909 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.87s 2026-02-28 01:00:45.558917 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.72s 2026-02-28 01:00:45.558924 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.57s 2026-02-28 01:00:45.558932 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.56s 2026-02-28 01:00:45.558939 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.55s 2026-02-28 01:00:45.558946 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.52s 2026-02-28 01:00:45.558953 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.40s 2026-02-28 01:00:45.558961 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.12s 2026-02-28 01:00:45.558974 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.94s 2026-02-28 01:00:45.558982 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.72s 2026-02-28 01:00:45.558990 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.33s 2026-02-28 01:00:45.558998 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.31s 2026-02-28 01:00:45.559005 | orchestrator | opensearch : Setting sysctl values 
-------------------------------------- 0.77s 2026-02-28 01:00:45.559012 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.68s 2026-02-28 01:00:45.559020 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.64s 2026-02-28 01:00:45.559027 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2026-02-28 01:00:45.559034 | orchestrator | 2026-02-28 01:00:45 | INFO  | Task 6cfd6d28-095d-4fda-af77-9bb0780c82cb is in state STARTED 2026-02-28 01:00:45.559496 | orchestrator | 2026-02-28 01:00:45 | INFO  | Task 298724df-a6a6-4aa6-bc71-a267cc5fa183 is in state STARTED 2026-02-28 01:00:45.559734 | orchestrator | 2026-02-28 01:00:45 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:00:48.603809 | orchestrator | 2026-02-28 01:00:48 | INFO  | Task 6cfd6d28-095d-4fda-af77-9bb0780c82cb is in state STARTED 2026-02-28 01:00:48.605402 | orchestrator | 2026-02-28 01:00:48 | INFO  | Task 298724df-a6a6-4aa6-bc71-a267cc5fa183 is in state STARTED 2026-02-28 01:00:48.605480 | orchestrator | 2026-02-28 01:00:48 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:00:51.648616 | orchestrator | 2026-02-28 01:00:51 | INFO  | Task 6cfd6d28-095d-4fda-af77-9bb0780c82cb is in state STARTED 2026-02-28 01:00:51.649776 | orchestrator | 2026-02-28 01:00:51 | INFO  | Task 298724df-a6a6-4aa6-bc71-a267cc5fa183 is in state STARTED 2026-02-28 01:00:51.649957 | orchestrator | 2026-02-28 01:00:51 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:00:54.695514 | orchestrator | 2026-02-28 01:00:54 | INFO  | Task 6cfd6d28-095d-4fda-af77-9bb0780c82cb is in state STARTED 2026-02-28 01:00:54.698236 | orchestrator | 2026-02-28 01:00:54 | INFO  | Task 298724df-a6a6-4aa6-bc71-a267cc5fa183 is in state STARTED 2026-02-28 01:00:54.698311 | orchestrator | 2026-02-28 01:00:54 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:00:57.746916 | orchestrator | 
2026-02-28 01:00:57 | INFO  | Task 6cfd6d28-095d-4fda-af77-9bb0780c82cb is in state STARTED 2026-02-28 01:00:57.749877 | orchestrator | 2026-02-28 01:00:57 | INFO  | Task 298724df-a6a6-4aa6-bc71-a267cc5fa183 is in state STARTED 2026-02-28 01:00:57.749947 | orchestrator | 2026-02-28 01:00:57 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:00.802917 | orchestrator | 2026-02-28 01:01:00 | INFO  | Task 6cfd6d28-095d-4fda-af77-9bb0780c82cb is in state STARTED 2026-02-28 01:01:00.804508 | orchestrator | 2026-02-28 01:01:00 | INFO  | Task 298724df-a6a6-4aa6-bc71-a267cc5fa183 is in state STARTED 2026-02-28 01:01:00.804597 | orchestrator | 2026-02-28 01:01:00 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:03.874491 | orchestrator | 2026-02-28 01:01:03 | INFO  | Task 6cfd6d28-095d-4fda-af77-9bb0780c82cb is in state STARTED 2026-02-28 01:01:03.874574 | orchestrator | 2026-02-28 01:01:03 | INFO  | Task 298724df-a6a6-4aa6-bc71-a267cc5fa183 is in state STARTED 2026-02-28 01:01:03.874581 | orchestrator | 2026-02-28 01:01:03 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:06.927084 | orchestrator | 2026-02-28 01:01:06 | INFO  | Task 6cfd6d28-095d-4fda-af77-9bb0780c82cb is in state STARTED 2026-02-28 01:01:06.927294 | orchestrator | 2026-02-28 01:01:06 | INFO  | Task 298724df-a6a6-4aa6-bc71-a267cc5fa183 is in state STARTED 2026-02-28 01:01:06.927308 | orchestrator | 2026-02-28 01:01:06 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:09.991409 | orchestrator | 2026-02-28 01:01:09 | INFO  | Task 6cfd6d28-095d-4fda-af77-9bb0780c82cb is in state STARTED 2026-02-28 01:01:09.992761 | orchestrator | 2026-02-28 01:01:09 | INFO  | Task 298724df-a6a6-4aa6-bc71-a267cc5fa183 is in state STARTED 2026-02-28 01:01:09.993035 | orchestrator | 2026-02-28 01:01:09 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:13.047969 | orchestrator | 2026-02-28 01:01:13 | INFO  | Task 6cfd6d28-095d-4fda-af77-9bb0780c82cb is in 
state SUCCESS 2026-02-28 01:01:13.048747 | orchestrator | 2026-02-28 01:01:13.048780 | orchestrator | 2026-02-28 01:01:13.048790 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-02-28 01:01:13.048799 | orchestrator | 2026-02-28 01:01:13.048807 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-02-28 01:01:13.048816 | orchestrator | Saturday 28 February 2026 00:57:35 +0000 (0:00:00.099) 0:00:00.099 ***** 2026-02-28 01:01:13.048825 | orchestrator | ok: [localhost] => { 2026-02-28 01:01:13.048850 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-02-28 01:01:13.048894 | orchestrator | } 2026-02-28 01:01:13.048909 | orchestrator | 2026-02-28 01:01:13.048921 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-02-28 01:01:13.048932 | orchestrator | Saturday 28 February 2026 00:57:35 +0000 (0:00:00.056) 0:00:00.156 ***** 2026-02-28 01:01:13.048943 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-02-28 01:01:13.048952 | orchestrator | ...ignoring 2026-02-28 01:01:13.048959 | orchestrator | 2026-02-28 01:01:13.048966 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-02-28 01:01:13.048973 | orchestrator | Saturday 28 February 2026 00:57:39 +0000 (0:00:03.011) 0:00:03.168 ***** 2026-02-28 01:01:13.048980 | orchestrator | skipping: [localhost] 2026-02-28 01:01:13.048987 | orchestrator | 2026-02-28 01:01:13.049084 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-02-28 01:01:13.049126 | orchestrator | Saturday 28 February 2026 00:57:39 +0000 (0:00:00.073) 0:00:03.241 ***** 2026-02-28 01:01:13.049133 | orchestrator | ok: [localhost] 2026-02-28 01:01:13.049241 | orchestrator | 2026-02-28 01:01:13.049253 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:01:13.049264 | orchestrator | 2026-02-28 01:01:13.049275 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:01:13.049285 | orchestrator | Saturday 28 February 2026 00:57:39 +0000 (0:00:00.168) 0:00:03.409 ***** 2026-02-28 01:01:13.049294 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:01:13.049304 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:01:13.049314 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:01:13.049325 | orchestrator | 2026-02-28 01:01:13.049335 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:01:13.049346 | orchestrator | Saturday 28 February 2026 00:57:39 +0000 (0:00:00.359) 0:00:03.769 ***** 2026-02-28 01:01:13.049356 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-28 01:01:13.049902 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-02-28 01:01:13.049929 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-28 01:01:13.049941 | orchestrator | 2026-02-28 01:01:13.049954 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-28 01:01:13.049964 | orchestrator | 2026-02-28 01:01:13.049975 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-28 01:01:13.049987 | orchestrator | Saturday 28 February 2026 00:57:40 +0000 (0:00:00.608) 0:00:04.377 ***** 2026-02-28 01:01:13.049999 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-28 01:01:13.050011 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-28 01:01:13.050079 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-28 01:01:13.050093 | orchestrator | 2026-02-28 01:01:13.050104 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-28 01:01:13.050116 | orchestrator | Saturday 28 February 2026 00:57:40 +0000 (0:00:00.459) 0:00:04.837 ***** 2026-02-28 01:01:13.050129 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:01:13.050142 | orchestrator | 2026-02-28 01:01:13.050155 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-28 01:01:13.050167 | orchestrator | Saturday 28 February 2026 00:57:41 +0000 (0:00:00.613) 0:00:05.450 ***** 2026-02-28 01:01:13.050241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-28 01:01:13.050276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-28 01:01:13.050291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-28 01:01:13.050311 | orchestrator |
2026-02-28 01:01:13.050348 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-02-28 01:01:13.050361 | orchestrator | Saturday 28 February 2026 00:57:44 +0000 (0:00:03.545) 0:00:08.996 *****
2026-02-28 01:01:13.050371 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:01:13.050382 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.050394 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.050405 | orchestrator |
2026-02-28 01:01:13.050417 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-02-28 01:01:13.050435 | orchestrator | Saturday 28 February 2026 00:57:45 +0000 (0:00:01.029) 0:00:10.026 *****
2026-02-28 01:01:13.050446 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.050457 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.050469 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:01:13.050481 | orchestrator |
2026-02-28 01:01:13.050492 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-02-28 01:01:13.050503 | orchestrator | Saturday 28 February 2026 00:57:47 +0000 (0:00:01.648) 0:00:11.674 *****
2026-02-28 01:01:13.050517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-28 01:01:13.050539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-28 01:01:13.050567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-28 
01:01:13.050580 | orchestrator |
2026-02-28 01:01:13.050592 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-02-28 01:01:13.050604 | orchestrator | Saturday 28 February 2026 00:57:52 +0000 (0:00:04.964) 0:00:16.638 *****
2026-02-28 01:01:13.050617 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.050648 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.050660 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:01:13.050672 | orchestrator |
2026-02-28 01:01:13.050684 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-02-28 01:01:13.050696 | orchestrator | Saturday 28 February 2026 00:57:53 +0000 (0:00:01.259) 0:00:17.898 *****
2026-02-28 01:01:13.050716 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:01:13.050728 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:01:13.050740 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:01:13.050751 | orchestrator |
2026-02-28 01:01:13.050763 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-28 01:01:13.050776 | orchestrator | Saturday 28 February 2026 00:57:58 +0000 (0:00:04.798) 0:00:22.696 *****
2026-02-28 01:01:13.050787 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 01:01:13.050798 | orchestrator |
2026-02-28 01:01:13.050810 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-02-28 01:01:13.050822 | orchestrator | Saturday 28 February 2026 00:57:59 +0000 (0:00:00.589) 0:00:23.286 *****
2026-02-28 01:01:13.050852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes':
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 01:01:13.050866 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:13.050879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 01:01:13.050903 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:13.050924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 01:01:13.050936 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:13.050947 | orchestrator | 2026-02-28 01:01:13.050964 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-28 01:01:13.050975 | orchestrator | Saturday 28 February 2026 00:58:03 +0000 (0:00:04.328) 0:00:27.615 ***** 2026-02-28 01:01:13.050987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 01:01:13.051007 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:13.051173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 01:01:13.051193 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:13.051212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 01:01:13.051232 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:13.051239 | orchestrator | 2026-02-28 01:01:13.051246 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-28 01:01:13.051252 | orchestrator | Saturday 28 February 2026 00:58:06 +0000 (0:00:03.346) 0:00:30.961 ***** 2026-02-28 01:01:13.051260 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 01:01:13.051267 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:13.051284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 01:01:13.051297 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:13.051305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 01:01:13.051312 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:13.051319 | orchestrator | 2026-02-28 01:01:13.051326 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-02-28 01:01:13.051332 | orchestrator | Saturday 28 February 2026 00:58:10 +0000 
(0:00:03.966) 0:00:34.927 ***** 2026-02-28 01:01:13.051352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-28 01:01:13.051365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-28 01:01:13.051382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-28 01:01:13.051391 | orchestrator |
2026-02-28 01:01:13.051398 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] ***
2026-02-28 01:01:13.051404 | orchestrator | Saturday 28 February 2026 00:58:14 +0000 (0:00:03.484) 0:00:38.411 *****
2026-02-28 01:01:13.051416 | orchestrator | changed: [testbed-node-0] => {
2026-02-28 01:01:13.051423 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 01:01:13.051430 | orchestrator | }
2026-02-28 01:01:13.051437 | orchestrator | changed: [testbed-node-1] => {
2026-02-28 01:01:13.051444 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 01:01:13.051450 | orchestrator | }
2026-02-28 01:01:13.051457 | orchestrator | changed: [testbed-node-2] => {
2026-02-28 01:01:13.051464 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 01:01:13.051471 | orchestrator | }
2026-02-28 01:01:13.051478 | orchestrator |
2026-02-28 01:01:13.051484 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-28 01:01:13.051491 | orchestrator | Saturday 28 February 2026 00:58:14 +0000 (0:00:00.591) 0:00:39.003 *****
2026-02-28 01:01:13.051498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-28 01:01:13.051515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-28 01:01:13.051527 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.051534 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.051542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-28 01:01:13.051549 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.051556 | orchestrator |
2026-02-28 01:01:13.051563 | orchestrator | TASK [mariadb : Checking for mariadb cluster] **********************************
2026-02-28 01:01:13.051570 | orchestrator | Saturday 28 February 2026 00:58:17 +0000 (0:00:02.614) 0:00:41.618 *****
2026-02-28 01:01:13.051576 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.051583 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.051590 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.051597 | orchestrator |
2026-02-28 01:01:13.051604 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] ****************************
2026-02-28 01:01:13.051610 | orchestrator | Saturday 28 February 2026 00:58:17 +0000 (0:00:00.296) 0:00:41.915 *****
2026-02-28 01:01:13.051617 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.051624 | orchestrator |
2026-02-28 01:01:13.051682 | orchestrator | TASK [mariadb : Stop MariaDB containers] ***************************************
2026-02-28 01:01:13.051690 | orchestrator | Saturday 28 February 2026 00:58:17 +0000 (0:00:00.105) 0:00:42.020 *****
2026-02-28 01:01:13.051697 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.051703 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.051710 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.051717 | orchestrator |
2026-02-28 01:01:13.051724 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************
2026-02-28 01:01:13.051730 | orchestrator | Saturday 28 February 2026 00:58:18 +0000 (0:00:00.453) 0:00:42.474 *****
2026-02-28 01:01:13.051742 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.051749 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.051762 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.051769 | orchestrator |
2026-02-28 01:01:13.051775 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ******************************
2026-02-28 01:01:13.051782 | orchestrator | Saturday 28 February 2026 00:58:18 +0000 (0:00:00.317) 0:00:42.791 *****
2026-02-28 01:01:13.051789 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.051796 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.051807 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.051820 | orchestrator |
2026-02-28 01:01:13.051841 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ******************************
2026-02-28 01:01:13.051852 | orchestrator | Saturday 28 February 2026 00:58:18 +0000 (0:00:00.319) 0:00:43.111 *****
2026-02-28 01:01:13.051863 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.051873 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.051883 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.051895 | orchestrator |
2026-02-28 01:01:13.051907 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] ***************************
2026-02-28 01:01:13.051920 | orchestrator | Saturday 28 February 2026 00:58:19 +0000 (0:00:00.394) 0:00:43.506 *****
2026-02-28 01:01:13.051930 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.051939 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.051946 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.051954 | orchestrator |
2026-02-28 01:01:13.051960 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] ****************************
2026-02-28 01:01:13.051967 | orchestrator | Saturday 28 February 2026 00:58:19 +0000 (0:00:00.483) 0:00:43.989 *****
2026-02-28 01:01:13.051974 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.051981 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.051987 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.051994 | orchestrator |
2026-02-28 01:01:13.052001 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ********************
2026-02-28 01:01:13.052007 | orchestrator | Saturday 28 February 2026 00:58:20 +0000 (0:00:00.303) 0:00:44.292 *****
2026-02-28 01:01:13.052014 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-28 01:01:13.052021 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-28 01:01:13.052028 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-28 01:01:13.052035 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-28 01:01:13.052041 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-28 01:01:13.052048 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-28 01:01:13.052055 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.052062 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.052068 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-28 01:01:13.052075 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-28 01:01:13.052082 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-28 01:01:13.052088 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.052095 | orchestrator |
2026-02-28 01:01:13.052102 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] ***
2026-02-28 01:01:13.052109 | orchestrator | Saturday 28 February 2026 00:58:20 +0000 (0:00:00.325) 0:00:44.617 *****
2026-02-28 01:01:13.052115 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.052122 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.052129 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.052135 | orchestrator |
2026-02-28 01:01:13.052142 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] *****
2026-02-28 01:01:13.052149 | orchestrator | Saturday 28 February 2026 00:58:20 +0000 (0:00:00.303) 0:00:44.920 *****
2026-02-28 01:01:13.052156 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.052162 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.052169 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.052182 | orchestrator |
2026-02-28 01:01:13.052189 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] ***************
2026-02-28 01:01:13.052196 | orchestrator | Saturday 28 February 2026 00:58:21 +0000 (0:00:00.321) 0:00:45.242 *****
2026-02-28 01:01:13.052202 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.052209 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.052216 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.052222 | orchestrator |
2026-02-28 01:01:13.052229 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] ***
2026-02-28 01:01:13.052236 | orchestrator | Saturday 28 February 2026 00:58:21 +0000 (0:00:00.539) 0:00:45.781 *****
2026-02-28 01:01:13.052243 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.052250 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.052256 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.052263 | orchestrator |
2026-02-28 01:01:13.052270 | orchestrator | TASK [mariadb : Starting first MariaDB container] ******************************
2026-02-28 01:01:13.052277 | orchestrator | Saturday 28 February 2026 00:58:21 +0000 (0:00:00.342) 0:00:46.124 *****
2026-02-28 01:01:13.052283 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.052290 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.052297 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.052304 | orchestrator |
2026-02-28 01:01:13.052310 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ******************************
2026-02-28 01:01:13.052321 | orchestrator | Saturday 28 February 2026 00:58:22 +0000 (0:00:00.334) 0:00:46.459 *****
2026-02-28 01:01:13.052332 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.052344 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.052354 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.052365 | orchestrator |
2026-02-28 01:01:13.052376 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************
2026-02-28 01:01:13.052388 | orchestrator | Saturday 28 February 2026 00:58:22 +0000 (0:00:00.333) 0:00:46.792 *****
2026-02-28 01:01:13.052399 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.052406 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.052413 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.052419 | orchestrator |
2026-02-28 01:01:13.052426 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************
2026-02-28 01:01:13.052443 | orchestrator | Saturday 28 February 2026 00:58:23 +0000 (0:00:00.586) 0:00:47.379 *****
2026-02-28 01:01:13.052455 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.052467 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.052478 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.052490 | orchestrator |
2026-02-28 01:01:13.052502 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] ****************************
2026-02-28 01:01:13.052514 | orchestrator | Saturday 28 February 2026 00:58:23 +0000 (0:00:00.345) 0:00:47.724 *****
2026-02-28 01:01:13.052529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-28 01:01:13.052546 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.052559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-28 01:01:13.052571 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.052596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-28 01:01:13.052622 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.052683 | orchestrator |
2026-02-28 01:01:13.052695 | orchestrator | TASK [mariadb : Wait for slave MariaDB] ****************************************
2026-02-28 01:01:13.052707 | orchestrator | Saturday 28 February 2026 00:58:25 +0000 (0:00:02.351) 0:00:50.075 *****
2026-02-28 01:01:13.052719 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.052730 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.052742 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.052754 | orchestrator |
2026-02-28 01:01:13.052765 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] ***************************
2026-02-28 01:01:13.052777 | orchestrator | Saturday 28 February 2026 00:58:26 +0000 (0:00:00.349) 0:00:50.425 *****
2026-02-28 01:01:13.052789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-28 01:01:13.052808 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.052854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-28 01:01:13.052876 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.052889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-28 01:01:13.052902 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.052914 | orchestrator |
2026-02-28 01:01:13.052925 | orchestrator | TASK [mariadb : Wait for master mariadb] ***************************************
2026-02-28 01:01:13.052936 | orchestrator | Saturday 28 February 2026 00:58:28 +0000 (0:00:02.363) 0:00:52.788 *****
2026-02-28 01:01:13.052947 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.052959 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.052970 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.052981 | orchestrator |
2026-02-28 01:01:13.052992 | orchestrator | TASK [service-check : mariadb | Get container facts] ***************************
2026-02-28 01:01:13.053010 | orchestrator | Saturday 28 February 2026 00:58:28 +0000 (0:00:00.329) 0:00:53.118 *****
2026-02-28 01:01:13.053022 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.053033 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.053044 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.053056 | orchestrator |
2026-02-28 01:01:13.053067 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] ***
2026-02-28 01:01:13.053078 | orchestrator | Saturday 28 February 2026 00:58:29 +0000 (0:00:00.333) 0:00:53.451 *****
2026-02-28 01:01:13.053091 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.053109 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.053128 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.053139 | orchestrator |
2026-02-28 01:01:13.053151 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-02-28 01:01:13.053161 | orchestrator | Saturday 28 February 2026 00:58:29 +0000 (0:00:00.324) 0:00:53.776 *****
2026-02-28 01:01:13.053171 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.053180 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.053189 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.053199 | orchestrator |
2026-02-28 01:01:13.053210 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-02-28 01:01:13.053222 | orchestrator | Saturday 28 February 2026 00:58:30 +0000 (0:00:00.775) 0:00:54.551 *****
2026-02-28 01:01:13.053233 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.053244 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.053256 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.053267 | orchestrator |
2026-02-28 01:01:13.053278 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-02-28 01:01:13.053290 | orchestrator | Saturday 28 February 2026 00:58:30 +0000 (0:00:00.358) 0:00:54.910 *****
2026-02-28 01:01:13.053301 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:01:13.053312 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:01:13.053323 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:01:13.053334 | orchestrator |
2026-02-28 01:01:13.053345 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-02-28 01:01:13.053356 | orchestrator | Saturday 28 February 2026 00:58:31 +0000 (0:00:00.872) 0:00:55.782 *****
2026-02-28 01:01:13.053366 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:01:13.053376 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:01:13.053388 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:01:13.053400 | orchestrator |
2026-02-28 01:01:13.053411 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-02-28 01:01:13.053423 | orchestrator | Saturday 28 February 2026 00:58:32 +0000 (0:00:00.602) 0:00:56.385 *****
2026-02-28 01:01:13.053434 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:01:13.053445 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:01:13.053457 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:01:13.053468 | orchestrator |
2026-02-28 01:01:13.053479 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-02-28 01:01:13.053491 | orchestrator | Saturday 28 February 2026 00:58:32 +0000 (0:00:00.380) 0:00:56.766 *****
2026-02-28 01:01:13.053503 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-02-28 01:01:13.053516 | orchestrator | ...ignoring
2026-02-28 01:01:13.053528 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-02-28 01:01:13.053540 | orchestrator | ...ignoring
2026-02-28 01:01:13.053552 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-02-28 01:01:13.053563 | orchestrator | ...ignoring
2026-02-28 01:01:13.053575 | orchestrator |
2026-02-28 01:01:13.053586 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-02-28 01:01:13.053598 | orchestrator | Saturday 28 February 2026 00:58:43 +0000 (0:00:10.760) 0:01:07.526 *****
2026-02-28 01:01:13.053609 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:01:13.053620 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:01:13.053648 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:01:13.053660 | orchestrator |
2026-02-28 01:01:13.053672 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-02-28 01:01:13.053683 | orchestrator | Saturday 28 February 2026 00:58:43 +0000 (0:00:00.376) 0:01:07.903 *****
2026-02-28 01:01:13.053694 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.053706 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.053725 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.053737 | orchestrator |
2026-02-28 01:01:13.053749 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-02-28 01:01:13.053760 | orchestrator | Saturday 28 February 2026 00:58:44 +0000 (0:00:00.569) 0:01:08.473 *****
2026-02-28 01:01:13.053771 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.053783 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.053794 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.053805 | orchestrator |
2026-02-28 01:01:13.053817 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-02-28 01:01:13.053829 | orchestrator | Saturday 28 February 2026 00:58:44 +0000 (0:00:00.363) 0:01:08.837 *****
2026-02-28 01:01:13.053840 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.053851 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.053862 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.053874 | orchestrator |
2026-02-28 01:01:13.053886 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-02-28 01:01:13.053897 | orchestrator | Saturday 28 February 2026 00:58:45 +0000 (0:00:00.337) 0:01:09.175 *****
2026-02-28 01:01:13.053909 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:01:13.053920 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:01:13.053931 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:01:13.053942 | orchestrator |
2026-02-28 01:01:13.053953 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-02-28 01:01:13.053965 | orchestrator | Saturday 28 February 2026 00:58:45 +0000 (0:00:00.350) 0:01:09.525 *****
2026-02-28 01:01:13.053976 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:13.053995 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.054007 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.054076 | orchestrator |
2026-02-28 01:01:13.054089 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-28 01:01:13.054101 | orchestrator | Saturday 28 February 2026 00:58:45 +0000 (0:00:00.596) 0:01:10.122 *****
2026-02-28 01:01:13.054112 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:13.054124 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:13.054141 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-02-28 01:01:13.054153 | orchestrator |
2026-02-28 01:01:13.054165 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-02-28 01:01:13.054177 | orchestrator | Saturday 28 February 2026 00:58:46 +0000 (0:00:00.416) 0:01:10.538 *****
2026-02-28
01:01:13.054188 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:01:13.054199 | orchestrator | 2026-02-28 01:01:13.054210 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-02-28 01:01:13.054222 | orchestrator | Saturday 28 February 2026 00:58:56 +0000 (0:00:10.550) 0:01:21.088 ***** 2026-02-28 01:01:13.054234 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:01:13.054246 | orchestrator | 2026-02-28 01:01:13.054259 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-28 01:01:13.054271 | orchestrator | Saturday 28 February 2026 00:58:57 +0000 (0:00:00.151) 0:01:21.240 ***** 2026-02-28 01:01:13.054282 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:13.054293 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:13.054305 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:13.054317 | orchestrator | 2026-02-28 01:01:13.054329 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-02-28 01:01:13.054341 | orchestrator | Saturday 28 February 2026 00:58:58 +0000 (0:00:00.980) 0:01:22.221 ***** 2026-02-28 01:01:13.054352 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:01:13.054364 | orchestrator | 2026-02-28 01:01:13.054376 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-02-28 01:01:13.054388 | orchestrator | Saturday 28 February 2026 00:59:06 +0000 (0:00:08.741) 0:01:30.962 ***** 2026-02-28 01:01:13.054400 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:01:13.054412 | orchestrator | 2026-02-28 01:01:13.054429 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-02-28 01:01:13.054441 | orchestrator | Saturday 28 February 2026 00:59:09 +0000 (0:00:02.677) 0:01:33.639 ***** 2026-02-28 01:01:13.054452 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:01:13.054464 | 
orchestrator | 2026-02-28 01:01:13.054476 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-02-28 01:01:13.054488 | orchestrator | Saturday 28 February 2026 00:59:11 +0000 (0:00:02.380) 0:01:36.020 ***** 2026-02-28 01:01:13.054500 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:01:13.054512 | orchestrator | 2026-02-28 01:01:13.054524 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-28 01:01:13.054535 | orchestrator | Saturday 28 February 2026 00:59:12 +0000 (0:00:00.167) 0:01:36.187 ***** 2026-02-28 01:01:13.054547 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:13.054559 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:13.054571 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:13.054583 | orchestrator | 2026-02-28 01:01:13.054595 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-28 01:01:13.054606 | orchestrator | Saturday 28 February 2026 00:59:12 +0000 (0:00:00.396) 0:01:36.584 ***** 2026-02-28 01:01:13.054618 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:13.054644 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:01:13.054656 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:01:13.054668 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-28 01:01:13.054679 | orchestrator | 2026-02-28 01:01:13.054690 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-28 01:01:13.054702 | orchestrator | skipping: no hosts matched 2026-02-28 01:01:13.054714 | orchestrator | 2026-02-28 01:01:13.054725 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-28 01:01:13.054736 | orchestrator | 2026-02-28 01:01:13.054747 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-02-28 01:01:13.054759 | orchestrator | Saturday 28 February 2026 00:59:13 +0000 (0:00:00.614) 0:01:37.199 ***** 2026-02-28 01:01:13.054770 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:01:13.054782 | orchestrator | 2026-02-28 01:01:13.054794 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-28 01:01:13.054805 | orchestrator | Saturday 28 February 2026 00:59:32 +0000 (0:00:19.582) 0:01:56.781 ***** 2026-02-28 01:01:13.054816 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:01:13.054828 | orchestrator | 2026-02-28 01:01:13.054839 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-28 01:01:13.054851 | orchestrator | Saturday 28 February 2026 00:59:48 +0000 (0:00:15.722) 0:02:12.504 ***** 2026-02-28 01:01:13.054862 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:01:13.054873 | orchestrator | 2026-02-28 01:01:13.054885 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-28 01:01:13.054897 | orchestrator | 2026-02-28 01:01:13.054908 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-28 01:01:13.054919 | orchestrator | Saturday 28 February 2026 00:59:50 +0000 (0:00:02.158) 0:02:14.663 ***** 2026-02-28 01:01:13.054931 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:01:13.054942 | orchestrator | 2026-02-28 01:01:13.054953 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-28 01:01:13.054965 | orchestrator | Saturday 28 February 2026 01:00:10 +0000 (0:00:19.678) 0:02:34.342 ***** 2026-02-28 01:01:13.054976 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:01:13.054988 | orchestrator | 2026-02-28 01:01:13.054999 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-28 01:01:13.055010 
| orchestrator | Saturday 28 February 2026 01:00:25 +0000 (0:00:15.673) 0:02:50.015 ***** 2026-02-28 01:01:13.055021 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:01:13.055032 | orchestrator | 2026-02-28 01:01:13.055044 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-28 01:01:13.055063 | orchestrator | 2026-02-28 01:01:13.055082 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-28 01:01:13.055094 | orchestrator | Saturday 28 February 2026 01:00:28 +0000 (0:00:02.583) 0:02:52.598 ***** 2026-02-28 01:01:13.055105 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:01:13.055116 | orchestrator | 2026-02-28 01:01:13.055127 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-28 01:01:13.055139 | orchestrator | Saturday 28 February 2026 01:00:47 +0000 (0:00:18.685) 0:03:11.284 ***** 2026-02-28 01:01:13.055151 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:01:13.055162 | orchestrator | 2026-02-28 01:01:13.055179 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-28 01:01:13.055191 | orchestrator | Saturday 28 February 2026 01:00:48 +0000 (0:00:01.665) 0:03:12.950 ***** 2026-02-28 01:01:13.055202 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:01:13.055213 | orchestrator | 2026-02-28 01:01:13.055224 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-28 01:01:13.055235 | orchestrator | 2026-02-28 01:01:13.055246 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-28 01:01:13.055259 | orchestrator | Saturday 28 February 2026 01:00:51 +0000 (0:00:02.450) 0:03:15.401 ***** 2026-02-28 01:01:13.055266 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:01:13.055273 | orchestrator | 
2026-02-28 01:01:13.055279 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-28 01:01:13.055286 | orchestrator | Saturday 28 February 2026 01:00:51 +0000 (0:00:00.544) 0:03:15.945 ***** 2026-02-28 01:01:13.055293 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:13.055300 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:13.055306 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:01:13.055313 | orchestrator | 2026-02-28 01:01:13.055320 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-28 01:01:13.055326 | orchestrator | Saturday 28 February 2026 01:00:54 +0000 (0:00:02.279) 0:03:18.225 ***** 2026-02-28 01:01:13.055333 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:13.055340 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:13.055346 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:01:13.055353 | orchestrator | 2026-02-28 01:01:13.055360 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-28 01:01:13.055367 | orchestrator | Saturday 28 February 2026 01:00:56 +0000 (0:00:02.603) 0:03:20.828 ***** 2026-02-28 01:01:13.055373 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:13.055380 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:13.055387 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:01:13.055393 | orchestrator | 2026-02-28 01:01:13.055400 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-28 01:01:13.055407 | orchestrator | Saturday 28 February 2026 01:00:59 +0000 (0:00:02.380) 0:03:23.208 ***** 2026-02-28 01:01:13.055414 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:13.055420 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:13.055427 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:01:13.055434 | orchestrator | 
2026-02-28 01:01:13.055441 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-02-28 01:01:13.055447 | orchestrator | Saturday 28 February 2026 01:01:01 +0000 (0:00:02.318) 0:03:25.526 ***** 2026-02-28 01:01:13.055454 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:01:13.055461 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:01:13.055468 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:01:13.055474 | orchestrator | 2026-02-28 01:01:13.055481 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-02-28 01:01:13.055488 | orchestrator | Saturday 28 February 2026 01:01:06 +0000 (0:00:04.770) 0:03:30.297 ***** 2026-02-28 01:01:13.055494 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:13.055501 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:13.055514 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:13.055521 | orchestrator | 2026-02-28 01:01:13.055528 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-02-28 01:01:13.055534 | orchestrator | Saturday 28 February 2026 01:01:08 +0000 (0:00:02.566) 0:03:32.864 ***** 2026-02-28 01:01:13.055541 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:13.055548 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:13.055554 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:13.055561 | orchestrator | 2026-02-28 01:01:13.055568 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-28 01:01:13.055575 | orchestrator | Saturday 28 February 2026 01:01:09 +0000 (0:00:00.607) 0:03:33.472 ***** 2026-02-28 01:01:13.055582 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:01:13.055588 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:01:13.055595 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:01:13.055602 | orchestrator | 2026-02-28 01:01:13.055609 | 
orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-28 01:01:13.055615 | orchestrator | Saturday 28 February 2026 01:01:11 +0000 (0:00:02.595) 0:03:36.067 ***** 2026-02-28 01:01:13.055622 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:13.055652 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:13.055661 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:13.055668 | orchestrator | 2026-02-28 01:01:13.055674 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:01:13.055681 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-02-28 01:01:13.055689 | orchestrator | testbed-node-0 : ok=36  changed=17  unreachable=0 failed=0 skipped=39  rescued=0 ignored=1  2026-02-28 01:01:13.055697 | orchestrator | testbed-node-1 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1  2026-02-28 01:01:13.055704 | orchestrator | testbed-node-2 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1  2026-02-28 01:01:13.055711 | orchestrator | 2026-02-28 01:01:13.055718 | orchestrator | 2026-02-28 01:01:13.055730 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:01:13.055737 | orchestrator | Saturday 28 February 2026 01:01:12 +0000 (0:00:00.512) 0:03:36.580 ***** 2026-02-28 01:01:13.055744 | orchestrator | =============================================================================== 2026-02-28 01:01:13.055750 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 39.26s 2026-02-28 01:01:13.055757 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.40s 2026-02-28 01:01:13.055768 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 18.69s 2026-02-28 01:01:13.055775 | orchestrator | 
mariadb : Check MariaDB service port liveness -------------------------- 10.76s 2026-02-28 01:01:13.055782 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.55s 2026-02-28 01:01:13.055789 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.74s 2026-02-28 01:01:13.055796 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.96s 2026-02-28 01:01:13.055802 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.80s 2026-02-28 01:01:13.055809 | orchestrator | service-check : mariadb | Get container facts --------------------------- 4.77s 2026-02-28 01:01:13.055816 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.74s 2026-02-28 01:01:13.055823 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 4.33s 2026-02-28 01:01:13.055830 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.97s 2026-02-28 01:01:13.055836 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.55s 2026-02-28 01:01:13.055848 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 3.48s 2026-02-28 01:01:13.055855 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.35s 2026-02-28 01:01:13.055862 | orchestrator | Check MariaDB service --------------------------------------------------- 3.01s 2026-02-28 01:01:13.055869 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 2.68s 2026-02-28 01:01:13.055875 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.61s 2026-02-28 01:01:13.055882 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.60s 2026-02-28 01:01:13.055889 | orchestrator | mariadb : 
Wait for MariaDB service to be ready through VIP -------------- 2.60s 2026-02-28 01:01:13.055896 | orchestrator | 2026-02-28 01:01:13 | INFO  | Task 298724df-a6a6-4aa6-bc71-a267cc5fa183 is in state STARTED 2026-02-28 01:01:13.055903 | orchestrator | 2026-02-28 01:01:13 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:16.101273 | orchestrator | 2026-02-28 01:01:16 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:01:16.105587 | orchestrator | 2026-02-28 01:01:16 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:01:16.111411 | orchestrator | 2026-02-28 01:01:16 | INFO  | Task 298724df-a6a6-4aa6-bc71-a267cc5fa183 is in state STARTED 2026-02-28 01:01:16.111464 | orchestrator | 2026-02-28 01:01:16 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:19.168377 | orchestrator | 2026-02-28 01:01:19 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:01:19.169851 | orchestrator | 2026-02-28 01:01:19 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:01:19.171147 | orchestrator | 2026-02-28 01:01:19 | INFO  | Task 298724df-a6a6-4aa6-bc71-a267cc5fa183 is in state STARTED 2026-02-28 01:01:19.171178 | orchestrator | 2026-02-28 01:01:19 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:22.221707 | orchestrator | 2026-02-28 01:01:22 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:01:22.222927 | orchestrator | 2026-02-28 01:01:22 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:01:22.224564 | orchestrator | 2026-02-28 01:01:22 | INFO  | Task 298724df-a6a6-4aa6-bc71-a267cc5fa183 is in state STARTED 2026-02-28 01:01:22.224615 | orchestrator | 2026-02-28 01:01:22 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:25.264313 | orchestrator | 2026-02-28 01:01:25 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in 
state STARTED 2026-02-28 01:01:25.265780 | orchestrator | 2026-02-28 01:01:25 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:01:25.270133 | orchestrator | 2026-02-28 01:01:25 | INFO  | Task 298724df-a6a6-4aa6-bc71-a267cc5fa183 is in state STARTED 2026-02-28 01:01:25.270213 | orchestrator | 2026-02-28 01:01:25 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:28.317229 | orchestrator | 2026-02-28 01:01:28 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:01:28.317343 | orchestrator | 2026-02-28 01:01:28 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:01:28.321027 | orchestrator | 2026-02-28 01:01:28 | INFO  | Task 298724df-a6a6-4aa6-bc71-a267cc5fa183 is in state SUCCESS 2026-02-28 01:01:28.322676 | orchestrator | 2026-02-28 01:01:28.322759 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-28 01:01:28.323090 | orchestrator | 2.16.14 2026-02-28 01:01:28.323113 | orchestrator | 2026-02-28 01:01:28.323125 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-02-28 01:01:28.323164 | orchestrator | 2026-02-28 01:01:28.323192 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-28 01:01:28.323204 | orchestrator | Saturday 28 February 2026 00:59:09 +0000 (0:00:00.814) 0:00:00.814 ***** 2026-02-28 01:01:28.323216 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 01:01:28.323228 | orchestrator | 2026-02-28 01:01:28.323239 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-28 01:01:28.323250 | orchestrator | Saturday 28 February 2026 00:59:10 +0000 (0:00:00.803) 0:00:01.618 ***** 2026-02-28 01:01:28.323262 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:01:28.323273 
| orchestrator | ok: [testbed-node-5] 2026-02-28 01:01:28.323284 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:01:28.323295 | orchestrator | 2026-02-28 01:01:28.323306 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-28 01:01:28.323318 | orchestrator | Saturday 28 February 2026 00:59:11 +0000 (0:00:00.718) 0:00:02.337 ***** 2026-02-28 01:01:28.323328 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:01:28.323340 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:01:28.323351 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:01:28.323362 | orchestrator | 2026-02-28 01:01:28.323390 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-28 01:01:28.323413 | orchestrator | Saturday 28 February 2026 00:59:11 +0000 (0:00:00.454) 0:00:02.791 ***** 2026-02-28 01:01:28.323424 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:01:28.323435 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:01:28.323446 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:01:28.323457 | orchestrator | 2026-02-28 01:01:28.323467 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-28 01:01:28.323478 | orchestrator | Saturday 28 February 2026 00:59:13 +0000 (0:00:01.157) 0:00:03.949 ***** 2026-02-28 01:01:28.323489 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:01:28.323500 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:01:28.323511 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:01:28.323522 | orchestrator | 2026-02-28 01:01:28.323533 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-28 01:01:28.323544 | orchestrator | Saturday 28 February 2026 00:59:13 +0000 (0:00:00.359) 0:00:04.309 ***** 2026-02-28 01:01:28.323555 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:01:28.323566 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:01:28.323577 | 
orchestrator | ok: [testbed-node-5] 2026-02-28 01:01:28.323587 | orchestrator | 2026-02-28 01:01:28.323598 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-28 01:01:28.323609 | orchestrator | Saturday 28 February 2026 00:59:13 +0000 (0:00:00.389) 0:00:04.699 ***** 2026-02-28 01:01:28.323620 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:01:28.323721 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:01:28.323736 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:01:28.323749 | orchestrator | 2026-02-28 01:01:28.323761 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-28 01:01:28.323775 | orchestrator | Saturday 28 February 2026 00:59:14 +0000 (0:00:00.361) 0:00:05.060 ***** 2026-02-28 01:01:28.323788 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.323802 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:28.323815 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:28.323827 | orchestrator | 2026-02-28 01:01:28.323840 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-28 01:01:28.323853 | orchestrator | Saturday 28 February 2026 00:59:14 +0000 (0:00:00.550) 0:00:05.611 ***** 2026-02-28 01:01:28.323866 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:01:28.323879 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:01:28.323892 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:01:28.323904 | orchestrator | 2026-02-28 01:01:28.323917 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-28 01:01:28.323930 | orchestrator | Saturday 28 February 2026 00:59:15 +0000 (0:00:00.380) 0:00:05.991 ***** 2026-02-28 01:01:28.323950 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-28 01:01:28.323964 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-28 01:01:28.323977 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-28 01:01:28.323990 | orchestrator | 2026-02-28 01:01:28.324003 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-28 01:01:28.324016 | orchestrator | Saturday 28 February 2026 00:59:15 +0000 (0:00:00.761) 0:00:06.753 ***** 2026-02-28 01:01:28.324029 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:01:28.324041 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:01:28.324054 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:01:28.324066 | orchestrator | 2026-02-28 01:01:28.324077 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-28 01:01:28.324088 | orchestrator | Saturday 28 February 2026 00:59:16 +0000 (0:00:00.509) 0:00:07.263 ***** 2026-02-28 01:01:28.324098 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-28 01:01:28.324110 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-28 01:01:28.324120 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-28 01:01:28.324131 | orchestrator | 2026-02-28 01:01:28.324142 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-28 01:01:28.324153 | orchestrator | Saturday 28 February 2026 00:59:18 +0000 (0:00:02.614) 0:00:09.878 ***** 2026-02-28 01:01:28.324165 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-28 01:01:28.324176 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-28 01:01:28.324187 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-28 01:01:28.324198 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.324210 | 
orchestrator | 2026-02-28 01:01:28.324270 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-28 01:01:28.324283 | orchestrator | Saturday 28 February 2026 00:59:19 +0000 (0:00:00.714) 0:00:10.592 ***** 2026-02-28 01:01:28.324304 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.324319 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.324330 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.324341 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.324352 | orchestrator | 2026-02-28 01:01:28.324363 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-28 01:01:28.324374 | orchestrator | Saturday 28 February 2026 00:59:20 +0000 (0:00:00.964) 0:00:11.557 ***** 2026-02-28 01:01:28.324387 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.324402 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.324421 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.324432 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.324443 | orchestrator | 2026-02-28 01:01:28.324454 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-28 01:01:28.324465 | orchestrator | Saturday 28 February 2026 00:59:21 +0000 (0:00:00.404) 0:00:11.961 ***** 2026-02-28 01:01:28.324477 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '56b36b4d893d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-28 00:59:17.101729', 'end': '2026-02-28 00:59:17.148321', 'delta': '0:00:00.046592', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['56b36b4d893d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-28 01:01:28.324492 | 
orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '0c25584c5b64', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-28 00:59:18.108116', 'end': '2026-02-28 00:59:18.167633', 'delta': '0:00:00.059517', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['0c25584c5b64'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-28 01:01:28.324538 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9eeb95db97ce', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-28 00:59:18.722186', 'end': '2026-02-28 00:59:18.757279', 'delta': '0:00:00.035093', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9eeb95db97ce'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-28 01:01:28.324551 | orchestrator | 2026-02-28 01:01:28.324563 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-28 01:01:28.324574 | orchestrator | Saturday 28 February 2026 00:59:21 +0000 (0:00:00.201) 0:00:12.163 ***** 2026-02-28 01:01:28.324585 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:01:28.324596 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:01:28.324607 | 
orchestrator | ok: [testbed-node-5] 2026-02-28 01:01:28.324619 | orchestrator | 2026-02-28 01:01:28.324652 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-28 01:01:28.324664 | orchestrator | Saturday 28 February 2026 00:59:21 +0000 (0:00:00.490) 0:00:12.653 ***** 2026-02-28 01:01:28.324682 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-28 01:01:28.324694 | orchestrator | 2026-02-28 01:01:28.324705 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-28 01:01:28.324715 | orchestrator | Saturday 28 February 2026 00:59:23 +0000 (0:00:01.916) 0:00:14.570 ***** 2026-02-28 01:01:28.324726 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.324738 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:28.324749 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:28.324759 | orchestrator | 2026-02-28 01:01:28.324770 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-28 01:01:28.324781 | orchestrator | Saturday 28 February 2026 00:59:23 +0000 (0:00:00.309) 0:00:14.879 ***** 2026-02-28 01:01:28.324792 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.324817 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:28.324829 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:28.324850 | orchestrator | 2026-02-28 01:01:28.324861 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-28 01:01:28.324872 | orchestrator | Saturday 28 February 2026 00:59:24 +0000 (0:00:00.462) 0:00:15.342 ***** 2026-02-28 01:01:28.324883 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.324894 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:28.324906 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:28.324916 | orchestrator | 2026-02-28 01:01:28.324927 | 
orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-28 01:01:28.324938 | orchestrator | Saturday 28 February 2026 00:59:24 +0000 (0:00:00.551) 0:00:15.894 ***** 2026-02-28 01:01:28.324950 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:01:28.324961 | orchestrator | 2026-02-28 01:01:28.324972 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-28 01:01:28.324983 | orchestrator | Saturday 28 February 2026 00:59:25 +0000 (0:00:00.148) 0:00:16.042 ***** 2026-02-28 01:01:28.324994 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.325005 | orchestrator | 2026-02-28 01:01:28.325016 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-28 01:01:28.325027 | orchestrator | Saturday 28 February 2026 00:59:25 +0000 (0:00:00.299) 0:00:16.341 ***** 2026-02-28 01:01:28.325038 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.325049 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:28.325060 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:28.325071 | orchestrator | 2026-02-28 01:01:28.325082 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-28 01:01:28.325093 | orchestrator | Saturday 28 February 2026 00:59:25 +0000 (0:00:00.353) 0:00:16.695 ***** 2026-02-28 01:01:28.325104 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.325115 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:28.325126 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:28.325137 | orchestrator | 2026-02-28 01:01:28.325148 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-28 01:01:28.325159 | orchestrator | Saturday 28 February 2026 00:59:26 +0000 (0:00:00.323) 0:00:17.019 ***** 2026-02-28 01:01:28.325169 | orchestrator | skipping: [testbed-node-3] 
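The fsid tasks in this stretch of the play follow a reuse-or-generate pattern: querying the fsid on a running mon succeeded (rc 0), so `Set_fact fsid from current_fsid` ran and `Generate cluster fsid` was skipped; only on a fresh cluster would a new UUID be generated. A minimal sketch of that decision, with illustrative names that are not ceph-ansible's own:

```python
import uuid

def resolve_fsid(current_fsid_rc: int, current_fsid_stdout: str) -> str:
    """Reuse the fsid reported by a running mon; otherwise generate one.

    Mirrors the skip pattern in the log: with rc == 0 the
    'Set_fact fsid from current_fsid' task runs and
    'Generate cluster fsid' is skipped.
    """
    if current_fsid_rc == 0 and current_fsid_stdout.strip():
        # Running cluster: keep the existing cluster fsid verbatim.
        return current_fsid_stdout.strip()
    # Fresh cluster: mint a new fsid (illustrative; ceph-ansible uses
    # its own fact machinery for this).
    return str(uuid.uuid4())

print(resolve_fsid(0, "4b5c8c0a-0000-4000-8000-000000000000\n"))
```

The example fsid above is a placeholder, not a value from this deployment.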
2026-02-28 01:01:28.325180 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:28.325191 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:28.325202 | orchestrator | 2026-02-28 01:01:28.325212 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-28 01:01:28.325223 | orchestrator | Saturday 28 February 2026 00:59:26 +0000 (0:00:00.562) 0:00:17.582 ***** 2026-02-28 01:01:28.325234 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.325245 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:28.325255 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:28.325266 | orchestrator | 2026-02-28 01:01:28.325277 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-28 01:01:28.325288 | orchestrator | Saturday 28 February 2026 00:59:26 +0000 (0:00:00.319) 0:00:17.902 ***** 2026-02-28 01:01:28.325305 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.325316 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:28.325327 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:28.325338 | orchestrator | 2026-02-28 01:01:28.325348 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-28 01:01:28.325359 | orchestrator | Saturday 28 February 2026 00:59:27 +0000 (0:00:00.377) 0:00:18.279 ***** 2026-02-28 01:01:28.325370 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.325381 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:28.325392 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:28.325434 | orchestrator | 2026-02-28 01:01:28.325447 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-28 01:01:28.325458 | orchestrator | Saturday 28 February 2026 00:59:27 +0000 (0:00:00.350) 0:00:18.629 ***** 2026-02-28 01:01:28.325469 | orchestrator | skipping: [testbed-node-3] 
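In the containerized path shown earlier in this play (`Set_fact running_mon - container`), ceph-facts probes each mon host with `docker ps -q --filter name=ceph-mon-<hostname>` and treats a non-empty container id as a running mon. A hedged sketch of that selection over the loop results visible in the log (the helper name is mine, not ceph-ansible's):

```python
def first_running_mon(probe_results):
    """Return the first mon host whose `docker ps -q` probe found a container.

    probe_results: list of dicts shaped like the loop results in the log,
    each with 'item' (the mon hostname), 'rc', and 'stdout' (container id
    or empty string when no container matched the name filter).
    """
    for result in probe_results:
        if result.get("rc") == 0 and result.get("stdout", "").strip():
            return result["item"]
    return None

# Container ids taken from the log output above.
results = [
    {"item": "testbed-node-0", "rc": 0, "stdout": "56b36b4d893d"},
    {"item": "testbed-node-1", "rc": 0, "stdout": "0c25584c5b64"},
    {"item": "testbed-node-2", "rc": 0, "stdout": "9eeb95db97ce"},
]
print(first_running_mon(results))  # → testbed-node-0
```

This also matches the following `Set_fact _container_exec_cmd` task, which builds the exec command against the mon selected here.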
2026-02-28 01:01:28.325479 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:28.325496 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:28.325507 | orchestrator | 2026-02-28 01:01:28.325518 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-28 01:01:28.325529 | orchestrator | Saturday 28 February 2026 00:59:28 +0000 (0:00:00.523) 0:00:19.153 ***** 2026-02-28 01:01:28.325541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d18609e--ecdb--578d--a05b--e7913934f080-osd--block--4d18609e--ecdb--578d--a05b--e7913934f080', 'dm-uuid-LVM-qgrAIOwSnkhw1QWxPjfQy0LnHVk74kox537minsro3qYF1q9x33m0dfTKleDoHvM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.325554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dcf33d59--3ae6--5017--b2aa--1b02884ceea7-osd--block--dcf33d59--3ae6--5017--b2aa--1b02884ceea7', 'dm-uuid-LVM-mQcW5Fd3FgXWizSYHN01zaatnwPy7HyWH3DTpYJvJFi2eq4JqpT9LIOS6UR4q7nc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.325566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.325578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.325590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.325601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.325620 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-02-28 01:01:28.325682 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.325701 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.325713 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.325729 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103', 'scsi-SQEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part1', 'scsi-SQEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part14', 'scsi-SQEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part15', 'scsi-SQEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part16', 'scsi-SQEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:28.325756 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--73c4f4bf--6139--5634--9e57--de597eca9964-osd--block--73c4f4bf--6139--5634--9e57--de597eca9964', 'dm-uuid-LVM-WhiGASCrF3mL39HD4JICU92YzJF5yiKVE1Spqe9clI97Bg7oeard2ZXeo9zpd8oz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.325799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4d18609e--ecdb--578d--a05b--e7913934f080-osd--block--4d18609e--ecdb--578d--a05b--e7913934f080'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EpLw32-chyL-yRPv-Nd3g-kw4H-Ai5L-TRM6a3', 'scsi-0QEMU_QEMU_HARDDISK_9a1bcb93-f154-4a17-8f9d-a00d049f4cc1', 'scsi-SQEMU_QEMU_HARDDISK_9a1bcb93-f154-4a17-8f9d-a00d049f4cc1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:28.325813 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--17f6d453--f54a--57d2--bd55--b12b469b0db8-osd--block--17f6d453--f54a--57d2--bd55--b12b469b0db8', 'dm-uuid-LVM-OPiv2ckmCK2izFfGxciwOHhEGZyxB9cZupaOmhobB5kxnZKwRpRj2hWGH7kjGlBy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': 
'512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.325825 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--dcf33d59--3ae6--5017--b2aa--1b02884ceea7-osd--block--dcf33d59--3ae6--5017--b2aa--1b02884ceea7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MkHVfU-Fytw-UqRH-fjRb-Cqoc-lqqg-qkduQV', 'scsi-0QEMU_QEMU_HARDDISK_16fcc6e7-951a-43ed-8f3a-017ae19ace76', 'scsi-SQEMU_QEMU_HARDDISK_16fcc6e7-951a-43ed-8f3a-017ae19ace76'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:28.325837 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.325850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afb4b4ce-eec7-46b2-91b5-87577cac503b', 'scsi-SQEMU_QEMU_HARDDISK_afb4b4ce-eec7-46b2-91b5-87577cac503b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:28.325869 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.325881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-02-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:28.325919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
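The device items dumped by `Collect existed devices` show why auto-discovery could only ever select an empty disk like sdd: loop, dm, and optical entries are virtual, sda is partitioned (the root disk), and sdb/sdc already hold ceph OSD logical volumes. A rough filter over an `ansible_devices`-style mapping, assuming the same keys as in the log; this is an illustration of the skip pattern, not ceph-ansible's actual discovery code:

```python
def candidate_osd_disks(ansible_devices: dict) -> list:
    """Pick empty whole disks from an ansible_devices-style mapping."""
    picks = []
    for name, info in ansible_devices.items():
        if name.startswith(("loop", "dm-", "sr")):
            continue  # loopback, device-mapper, and optical devices
        if info.get("partitions") or info.get("holders"):
            continue  # partitioned, or already claimed (e.g. by a ceph LV)
        if info.get("removable") == "1":
            continue  # removable media
        picks.append("/dev/" + name)
    return picks

# Trimmed versions of the device facts seen in the log.
devices = {
    "sda": {"partitions": {"sda1": {}}, "holders": [], "removable": "0"},
    "sdb": {"partitions": {}, "holders": ["ceph--osd--block"], "removable": "0"},
    "sdd": {"partitions": {}, "holders": [], "removable": "0"},
    "loop0": {"partitions": {}, "holders": [], "removable": "0"},
}
print(candidate_osd_disks(devices))  # → ['/dev/sdd']
```

Here every item is skipped regardless, since the testbed predefines its OSD devices rather than relying on auto-discovery.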
 2026-02-28 01:01:28.325943 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.325955 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.325966 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.325978 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.325990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.326001 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.326059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97', 'scsi-SQEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part1', 'scsi-SQEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part14', 'scsi-SQEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part15', 'scsi-SQEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part16', 
'scsi-SQEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:28.326084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--73c4f4bf--6139--5634--9e57--de597eca9964-osd--block--73c4f4bf--6139--5634--9e57--de597eca9964'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eJbIch-EZat-lfeB-Foxv-JJgp-aTG0-8uTQ1P', 'scsi-0QEMU_QEMU_HARDDISK_2388cee9-22a9-4416-93b3-e236454bc031', 'scsi-SQEMU_QEMU_HARDDISK_2388cee9-22a9-4416-93b3-e236454bc031'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:28.326098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--17f6d453--f54a--57d2--bd55--b12b469b0db8-osd--block--17f6d453--f54a--57d2--bd55--b12b469b0db8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1rmCQt-VoW0-sOI6-C15c-CqIq-V4tx-5iix1t', 'scsi-0QEMU_QEMU_HARDDISK_fa3b351c-e54b-439c-bac1-d7e08e27df4b', 'scsi-SQEMU_QEMU_HARDDISK_fa3b351c-e54b-439c-bac1-d7e08e27df4b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:28.326110 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_151bff65-91b8-4b11-a525-96a3d98709b9', 'scsi-SQEMU_QEMU_HARDDISK_151bff65-91b8-4b11-a525-96a3d98709b9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:28.326128 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--04fa4cbf--2eb3--5c27--a3dd--f7c2dcd9ac18-osd--block--04fa4cbf--2eb3--5c27--a3dd--f7c2dcd9ac18', 'dm-uuid-LVM-XP4sp69lMwdqwWlMXCLx6v67l4rUhVpWhDvQF6SXe0SHtVySxBy3H9UMbto1Dw5v'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.326140 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel 
Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-02-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:28.326164 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f45f70cf--4b1a--5b52--bc0a--6a4d28c0a539-osd--block--f45f70cf--4b1a--5b52--bc0a--6a4d28c0a539', 'dm-uuid-LVM-Bt9ZLP0VROEB0wZ4ICpmM7zG2lv1hlPV5pcpsdqHuzg867lux994SUQj9QOymTAs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.326176 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:28.326193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.326205 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.326217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.326229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.326241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.326260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.326272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.326283 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:28.326310 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878', 'scsi-SQEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part1', 'scsi-SQEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part14', 'scsi-SQEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 
'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part15', 'scsi-SQEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part16', 'scsi-SQEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:28.326325 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--04fa4cbf--2eb3--5c27--a3dd--f7c2dcd9ac18-osd--block--04fa4cbf--2eb3--5c27--a3dd--f7c2dcd9ac18'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uKKsbr-x1Kk-0mgN-OpmP-VMSJ-h0lC-XG4wxU', 'scsi-0QEMU_QEMU_HARDDISK_96cb3389-09b8-4702-8328-a447a406a3bc', 'scsi-SQEMU_QEMU_HARDDISK_96cb3389-09b8-4702-8328-a447a406a3bc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:28.326343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f45f70cf--4b1a--5b52--bc0a--6a4d28c0a539-osd--block--f45f70cf--4b1a--5b52--bc0a--6a4d28c0a539'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rSfWzk-1CMq-fbaa-7rVi-ULYC-o1bD-yp5IFn', 'scsi-0QEMU_QEMU_HARDDISK_810ccfde-b37c-4538-b69c-a55db736621a', 'scsi-SQEMU_QEMU_HARDDISK_810ccfde-b37c-4538-b69c-a55db736621a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:28.326361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aebfdaae-19e6-4277-9533-aca5f477cfa9', 'scsi-SQEMU_QEMU_HARDDISK_aebfdaae-19e6-4277-9533-aca5f477cfa9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:28.326390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-02-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:28.326416 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:28.326435 | orchestrator | 2026-02-28 01:01:28.326454 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-28 01:01:28.326472 | orchestrator | Saturday 28 February 2026 00:59:28 +0000 (0:00:00.579) 0:00:19.733 ***** 2026-02-28 01:01:28.326491 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d18609e--ecdb--578d--a05b--e7913934f080-osd--block--4d18609e--ecdb--578d--a05b--e7913934f080', 'dm-uuid-LVM-qgrAIOwSnkhw1QWxPjfQy0LnHVk74kox537minsro3qYF1q9x33m0dfTKleDoHvM'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326505 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dcf33d59--3ae6--5017--b2aa--1b02884ceea7-osd--block--dcf33d59--3ae6--5017--b2aa--1b02884ceea7', 'dm-uuid-LVM-mQcW5Fd3FgXWizSYHN01zaatnwPy7HyWH3DTpYJvJFi2eq4JqpT9LIOS6UR4q7nc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326525 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326536 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326548 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326574 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326587 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326599 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326616 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326659 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326671 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--73c4f4bf--6139--5634--9e57--de597eca9964-osd--block--73c4f4bf--6139--5634--9e57--de597eca9964', 'dm-uuid-LVM-WhiGASCrF3mL39HD4JICU92YzJF5yiKVE1Spqe9clI97Bg7oeard2ZXeo9zpd8oz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326696 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--17f6d453--f54a--57d2--bd55--b12b469b0db8-osd--block--17f6d453--f54a--57d2--bd55--b12b469b0db8', 'dm-uuid-LVM-OPiv2ckmCK2izFfGxciwOHhEGZyxB9cZupaOmhobB5kxnZKwRpRj2hWGH7kjGlBy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326710 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103', 'scsi-SQEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part1', 'scsi-SQEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part14', 'scsi-SQEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part15', 'scsi-SQEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part16', 'scsi-SQEMU_QEMU_HARDDISK_84e1ce59-bd95-40da-9f03-5819b7d1b103-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326730 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326748 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4d18609e--ecdb--578d--a05b--e7913934f080-osd--block--4d18609e--ecdb--578d--a05b--e7913934f080'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EpLw32-chyL-yRPv-Nd3g-kw4H-Ai5L-TRM6a3', 'scsi-0QEMU_QEMU_HARDDISK_9a1bcb93-f154-4a17-8f9d-a00d049f4cc1', 'scsi-SQEMU_QEMU_HARDDISK_9a1bcb93-f154-4a17-8f9d-a00d049f4cc1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326767 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326780 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--dcf33d59--3ae6--5017--b2aa--1b02884ceea7-osd--block--dcf33d59--3ae6--5017--b2aa--1b02884ceea7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MkHVfU-Fytw-UqRH-fjRb-Cqoc-lqqg-qkduQV', 'scsi-0QEMU_QEMU_HARDDISK_16fcc6e7-951a-43ed-8f3a-017ae19ace76', 'scsi-SQEMU_QEMU_HARDDISK_16fcc6e7-951a-43ed-8f3a-017ae19ace76'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326802 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afb4b4ce-eec7-46b2-91b5-87577cac503b', 'scsi-SQEMU_QEMU_HARDDISK_afb4b4ce-eec7-46b2-91b5-87577cac503b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326814 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326830 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-02-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326874 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326901 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.326920 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326938 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326968 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.326988 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--04fa4cbf--2eb3--5c27--a3dd--f7c2dcd9ac18-osd--block--04fa4cbf--2eb3--5c27--a3dd--f7c2dcd9ac18', 'dm-uuid-LVM-XP4sp69lMwdqwWlMXCLx6v67l4rUhVpWhDvQF6SXe0SHtVySxBy3H9UMbto1Dw5v'], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.327006 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.327044 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f45f70cf--4b1a--5b52--bc0a--6a4d28c0a539-osd--block--f45f70cf--4b1a--5b52--bc0a--6a4d28c0a539', 'dm-uuid-LVM-Bt9ZLP0VROEB0wZ4ICpmM7zG2lv1hlPV5pcpsdqHuzg867lux994SUQj9QOymTAs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.327065 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97', 'scsi-SQEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part1', 'scsi-SQEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part14', 'scsi-SQEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part15', 'scsi-SQEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part16', 'scsi-SQEMU_QEMU_HARDDISK_85345139-bc47-4fee-b6f9-5fb160253b97-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.327096 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.327116 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--73c4f4bf--6139--5634--9e57--de597eca9964-osd--block--73c4f4bf--6139--5634--9e57--de597eca9964'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eJbIch-EZat-lfeB-Foxv-JJgp-aTG0-8uTQ1P', 'scsi-0QEMU_QEMU_HARDDISK_2388cee9-22a9-4416-93b3-e236454bc031', 'scsi-SQEMU_QEMU_HARDDISK_2388cee9-22a9-4416-93b3-e236454bc031'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:28.327173 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.327207 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--17f6d453--f54a--57d2--bd55--b12b469b0db8-osd--block--17f6d453--f54a--57d2--bd55--b12b469b0db8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1rmCQt-VoW0-sOI6-C15c-CqIq-V4tx-5iix1t', 'scsi-0QEMU_QEMU_HARDDISK_fa3b351c-e54b-439c-bac1-d7e08e27df4b', 'scsi-SQEMU_QEMU_HARDDISK_fa3b351c-e54b-439c-bac1-d7e08e27df4b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.327226 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.327247 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_151bff65-91b8-4b11-a525-96a3d98709b9', 'scsi-SQEMU_QEMU_HARDDISK_151bff65-91b8-4b11-a525-96a3d98709b9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.327265 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.327303 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-02-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.327324 | orchestrator | skipping: 
[testbed-node-4] 2026-02-28 01:01:28.327343 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.327375 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.327395 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.327415 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.327457 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878', 'scsi-SQEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part1', 'scsi-SQEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part14', 'scsi-SQEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part15', 'scsi-SQEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part16', 'scsi-SQEMU_QEMU_HARDDISK_ea0db2d4-7821-45ce-aa4b-0ff26e9cf878-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.327488 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--04fa4cbf--2eb3--5c27--a3dd--f7c2dcd9ac18-osd--block--04fa4cbf--2eb3--5c27--a3dd--f7c2dcd9ac18'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uKKsbr-x1Kk-0mgN-OpmP-VMSJ-h0lC-XG4wxU', 'scsi-0QEMU_QEMU_HARDDISK_96cb3389-09b8-4702-8328-a447a406a3bc', 'scsi-SQEMU_QEMU_HARDDISK_96cb3389-09b8-4702-8328-a447a406a3bc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.327508 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f45f70cf--4b1a--5b52--bc0a--6a4d28c0a539-osd--block--f45f70cf--4b1a--5b52--bc0a--6a4d28c0a539'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rSfWzk-1CMq-fbaa-7rVi-ULYC-o1bD-yp5IFn', 'scsi-0QEMU_QEMU_HARDDISK_810ccfde-b37c-4538-b69c-a55db736621a', 'scsi-SQEMU_QEMU_HARDDISK_810ccfde-b37c-4538-b69c-a55db736621a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.327528 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aebfdaae-19e6-4277-9533-aca5f477cfa9', 'scsi-SQEMU_QEMU_HARDDISK_aebfdaae-19e6-4277-9533-aca5f477cfa9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.327565 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-02-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:28.327596 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:28.327773 | orchestrator | 2026-02-28 01:01:28.327794 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-28 01:01:28.327811 | orchestrator | Saturday 28 February 2026 00:59:29 +0000 (0:00:00.681) 0:00:20.414 ***** 2026-02-28 01:01:28.327828 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:01:28.327846 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:01:28.327865 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:01:28.327885 | orchestrator | 2026-02-28 01:01:28.327904 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2026-02-28 01:01:28.327921 | orchestrator | Saturday 28 February 2026 00:59:30 +0000 (0:00:00.734) 0:00:21.149 ***** 2026-02-28 01:01:28.327938 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:01:28.327955 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:01:28.327973 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:01:28.327992 | orchestrator | 2026-02-28 01:01:28.328009 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-28 01:01:28.328028 | orchestrator | Saturday 28 February 2026 00:59:30 +0000 (0:00:00.606) 0:00:21.755 ***** 2026-02-28 01:01:28.328046 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:01:28.328064 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:01:28.328082 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:01:28.328100 | orchestrator | 2026-02-28 01:01:28.328118 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-28 01:01:28.328138 | orchestrator | Saturday 28 February 2026 00:59:31 +0000 (0:00:00.641) 0:00:22.397 ***** 2026-02-28 01:01:28.328156 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.328175 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:28.328194 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:28.328212 | orchestrator | 2026-02-28 01:01:28.328231 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-28 01:01:28.328251 | orchestrator | Saturday 28 February 2026 00:59:31 +0000 (0:00:00.364) 0:00:22.762 ***** 2026-02-28 01:01:28.328268 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.328286 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:28.328303 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:28.328319 | orchestrator | 2026-02-28 01:01:28.328333 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2026-02-28 01:01:28.328347 | orchestrator | Saturday 28 February 2026 00:59:32 +0000 (0:00:00.502) 0:00:23.264 ***** 2026-02-28 01:01:28.328362 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.328378 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:28.328394 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:28.328411 | orchestrator | 2026-02-28 01:01:28.328428 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-28 01:01:28.328443 | orchestrator | Saturday 28 February 2026 00:59:32 +0000 (0:00:00.574) 0:00:23.839 ***** 2026-02-28 01:01:28.328457 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-28 01:01:28.328471 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-28 01:01:28.328485 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-28 01:01:28.328501 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-28 01:01:28.328516 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-28 01:01:28.328532 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-28 01:01:28.328547 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-28 01:01:28.328563 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-28 01:01:28.328581 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-28 01:01:28.328596 | orchestrator | 2026-02-28 01:01:28.328612 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-28 01:01:28.328725 | orchestrator | Saturday 28 February 2026 00:59:34 +0000 (0:00:01.235) 0:00:25.075 ***** 2026-02-28 01:01:28.328745 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-28 01:01:28.328763 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-28 01:01:28.328778 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2026-02-28 01:01:28.328795 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.328811 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-28 01:01:28.328826 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-28 01:01:28.328842 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-28 01:01:28.328860 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:28.328875 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-28 01:01:28.328890 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-28 01:01:28.328909 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-28 01:01:28.328927 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:28.328943 | orchestrator | 2026-02-28 01:01:28.328960 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-28 01:01:28.328977 | orchestrator | Saturday 28 February 2026 00:59:34 +0000 (0:00:00.421) 0:00:25.496 ***** 2026-02-28 01:01:28.328996 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 01:01:28.329014 | orchestrator | 2026-02-28 01:01:28.329031 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-28 01:01:28.329171 | orchestrator | Saturday 28 February 2026 00:59:35 +0000 (0:00:00.831) 0:00:26.327 ***** 2026-02-28 01:01:28.329198 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.329214 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:28.329228 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:28.329242 | orchestrator | 2026-02-28 01:01:28.329267 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2026-02-28 01:01:28.329282 | orchestrator | Saturday 28 February 2026 00:59:35 +0000 (0:00:00.377) 0:00:26.705 ***** 2026-02-28 01:01:28.329296 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.329309 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:28.329323 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:28.329336 | orchestrator | 2026-02-28 01:01:28.329349 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-28 01:01:28.329363 | orchestrator | Saturday 28 February 2026 00:59:36 +0000 (0:00:00.339) 0:00:27.044 ***** 2026-02-28 01:01:28.329377 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.329390 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:28.329405 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:28.329418 | orchestrator | 2026-02-28 01:01:28.329432 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-28 01:01:28.329445 | orchestrator | Saturday 28 February 2026 00:59:36 +0000 (0:00:00.317) 0:00:27.362 ***** 2026-02-28 01:01:28.329459 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:01:28.329473 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:01:28.329486 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:01:28.329553 | orchestrator | 2026-02-28 01:01:28.329568 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-28 01:01:28.329581 | orchestrator | Saturday 28 February 2026 00:59:37 +0000 (0:00:00.930) 0:00:28.292 ***** 2026-02-28 01:01:28.329595 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 01:01:28.329608 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 01:01:28.329623 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 01:01:28.329661 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.329674 | 
orchestrator | 2026-02-28 01:01:28.329687 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-28 01:01:28.329721 | orchestrator | Saturday 28 February 2026 00:59:37 +0000 (0:00:00.386) 0:00:28.678 ***** 2026-02-28 01:01:28.329735 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 01:01:28.329748 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 01:01:28.329761 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 01:01:28.329773 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.329785 | orchestrator | 2026-02-28 01:01:28.329797 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-28 01:01:28.329809 | orchestrator | Saturday 28 February 2026 00:59:38 +0000 (0:00:00.401) 0:00:29.080 ***** 2026-02-28 01:01:28.329821 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 01:01:28.329834 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 01:01:28.329847 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 01:01:28.329860 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.329873 | orchestrator | 2026-02-28 01:01:28.329886 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-28 01:01:28.329900 | orchestrator | Saturday 28 February 2026 00:59:38 +0000 (0:00:00.457) 0:00:29.537 ***** 2026-02-28 01:01:28.329913 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:01:28.329927 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:01:28.329940 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:01:28.329955 | orchestrator | 2026-02-28 01:01:28.329969 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-28 01:01:28.329982 | orchestrator | Saturday 28 February 2026 00:59:39 
+0000 (0:00:00.417) 0:00:29.955 ***** 2026-02-28 01:01:28.329995 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-28 01:01:28.330009 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-28 01:01:28.330066 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-28 01:01:28.330081 | orchestrator | 2026-02-28 01:01:28.330095 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-28 01:01:28.330110 | orchestrator | Saturday 28 February 2026 00:59:39 +0000 (0:00:00.565) 0:00:30.520 ***** 2026-02-28 01:01:28.330124 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-28 01:01:28.330139 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-28 01:01:28.330154 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-28 01:01:28.330169 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-28 01:01:28.330183 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-28 01:01:28.330198 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-28 01:01:28.330211 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-28 01:01:28.330226 | orchestrator | 2026-02-28 01:01:28.330240 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-28 01:01:28.330254 | orchestrator | Saturday 28 February 2026 00:59:40 +0000 (0:00:01.122) 0:00:31.643 ***** 2026-02-28 01:01:28.330268 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-28 01:01:28.330281 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-28 01:01:28.330294 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-28 01:01:28.330308 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-28 01:01:28.330321 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-28 01:01:28.330351 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-28 01:01:28.330365 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-28 01:01:28.330394 | orchestrator | 2026-02-28 01:01:28.330409 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-02-28 01:01:28.330433 | orchestrator | Saturday 28 February 2026 00:59:42 +0000 (0:00:02.189) 0:00:33.833 ***** 2026-02-28 01:01:28.330448 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:28.330462 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:28.330476 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-02-28 01:01:28.330489 | orchestrator | 2026-02-28 01:01:28.330503 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-02-28 01:01:28.330517 | orchestrator | Saturday 28 February 2026 00:59:43 +0000 (0:00:00.403) 0:00:34.236 ***** 2026-02-28 01:01:28.330533 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-28 01:01:28.330550 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2026-02-28 01:01:28.330565 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-28 01:01:28.330580 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-28 01:01:28.330595 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-28 01:01:28.330609 | orchestrator | 2026-02-28 01:01:28.330622 | orchestrator | TASK [generate keys] *********************************************************** 2026-02-28 01:01:28.330659 | orchestrator | Saturday 28 February 2026 01:00:30 +0000 (0:00:47.602) 0:01:21.839 ***** 2026-02-28 01:01:28.330674 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:28.330688 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:28.330702 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:28.330715 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:28.330729 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:28.330742 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 
01:01:28.330755 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-02-28 01:01:28.330768 | orchestrator | 2026-02-28 01:01:28.330782 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-02-28 01:01:28.330796 | orchestrator | Saturday 28 February 2026 01:00:56 +0000 (0:00:25.919) 0:01:47.759 ***** 2026-02-28 01:01:28.330810 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:28.330823 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:28.330836 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:28.330849 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:28.330876 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:28.330891 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:28.330904 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-28 01:01:28.330917 | orchestrator | 2026-02-28 01:01:28.330931 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-02-28 01:01:28.330945 | orchestrator | Saturday 28 February 2026 01:01:09 +0000 (0:00:12.389) 0:02:00.148 ***** 2026-02-28 01:01:28.330958 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:28.330972 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-28 01:01:28.330985 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-28 01:01:28.330998 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:28.331021 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2026-02-28 01:01:28.331031 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-28 01:01:28.331053 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:28.331067 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-28 01:01:28.331080 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-28 01:01:28.331092 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:28.331104 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-28 01:01:28.331117 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-28 01:01:28.331131 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:28.331145 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-28 01:01:28.331159 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-28 01:01:28.331172 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:28.331185 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-28 01:01:28.331199 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-28 01:01:28.331212 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-02-28 01:01:28.331225 | orchestrator | 2026-02-28 01:01:28.331239 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:01:28.331252 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-28 01:01:28.331267 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-28 01:01:28.331281 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-28 01:01:28.331294 | orchestrator | 2026-02-28 01:01:28.331308 | orchestrator | 2026-02-28 01:01:28.331322 | orchestrator | 2026-02-28 01:01:28.331335 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:01:28.331349 | orchestrator | Saturday 28 February 2026 01:01:27 +0000 (0:00:18.070) 0:02:18.219 ***** 2026-02-28 01:01:28.331358 | orchestrator | =============================================================================== 2026-02-28 01:01:28.331366 | orchestrator | create openstack pool(s) ----------------------------------------------- 47.60s 2026-02-28 01:01:28.331374 | orchestrator | generate keys ---------------------------------------------------------- 25.92s 2026-02-28 01:01:28.331382 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.07s 2026-02-28 01:01:28.331397 | orchestrator | get keys from monitors ------------------------------------------------- 12.39s 2026-02-28 01:01:28.331405 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.61s 2026-02-28 01:01:28.331413 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.19s 2026-02-28 01:01:28.331421 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.92s 2026-02-28 01:01:28.331430 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.24s 2026-02-28 01:01:28.331439 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 1.16s 2026-02-28 01:01:28.331447 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.12s 2026-02-28 
01:01:28.331455 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.96s 2026-02-28 01:01:28.331463 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.93s 2026-02-28 01:01:28.331471 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.83s 2026-02-28 01:01:28.331478 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.80s 2026-02-28 01:01:28.331486 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.76s 2026-02-28 01:01:28.331495 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.73s 2026-02-28 01:01:28.331503 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.72s 2026-02-28 01:01:28.331511 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.71s 2026-02-28 01:01:28.331519 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.68s 2026-02-28 01:01:28.331526 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.64s 2026-02-28 01:01:31.368727 | orchestrator | 2026-02-28 01:01:31 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:01:31.370256 | orchestrator | 2026-02-28 01:01:31 | INFO  | Task aea08acf-24af-43d9-959c-d6ef4f5ec0b4 is in state STARTED 2026-02-28 01:01:31.371167 | orchestrator | 2026-02-28 01:01:31 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:01:31.372227 | orchestrator | 2026-02-28 01:01:31 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:34.417827 | orchestrator | 2026-02-28 01:01:34 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:01:34.421121 | orchestrator | 2026-02-28 01:01:34 | INFO  | Task 
aea08acf-24af-43d9-959c-d6ef4f5ec0b4 is in state STARTED 2026-02-28 01:01:34.422380 | orchestrator | 2026-02-28 01:01:34 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:01:34.422454 | orchestrator | 2026-02-28 01:01:34 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:37.455781 | orchestrator | 2026-02-28 01:01:37 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:01:37.456095 | orchestrator | 2026-02-28 01:01:37 | INFO  | Task aea08acf-24af-43d9-959c-d6ef4f5ec0b4 is in state STARTED 2026-02-28 01:01:37.456845 | orchestrator | 2026-02-28 01:01:37 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:01:37.456881 | orchestrator | 2026-02-28 01:01:37 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:40.489936 | orchestrator | 2026-02-28 01:01:40 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:01:40.491029 | orchestrator | 2026-02-28 01:01:40 | INFO  | Task aea08acf-24af-43d9-959c-d6ef4f5ec0b4 is in state STARTED 2026-02-28 01:01:40.492472 | orchestrator | 2026-02-28 01:01:40 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:01:40.492512 | orchestrator | 2026-02-28 01:01:40 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:43.524095 | orchestrator | 2026-02-28 01:01:43 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:01:43.524427 | orchestrator | 2026-02-28 01:01:43 | INFO  | Task aea08acf-24af-43d9-959c-d6ef4f5ec0b4 is in state STARTED 2026-02-28 01:01:43.525580 | orchestrator | 2026-02-28 01:01:43 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:01:43.525618 | orchestrator | 2026-02-28 01:01:43 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:46.565236 | orchestrator | 2026-02-28 01:01:46 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state 
STARTED 2026-02-28 01:01:46.568177 | orchestrator | 2026-02-28 01:01:46 | INFO  | Task aea08acf-24af-43d9-959c-d6ef4f5ec0b4 is in state STARTED 2026-02-28 01:01:46.572364 | orchestrator | 2026-02-28 01:01:46 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:01:46.572457 | orchestrator | 2026-02-28 01:01:46 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:49.611808 | orchestrator | 2026-02-28 01:01:49 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:01:49.613119 | orchestrator | 2026-02-28 01:01:49 | INFO  | Task aea08acf-24af-43d9-959c-d6ef4f5ec0b4 is in state STARTED 2026-02-28 01:01:49.618208 | orchestrator | 2026-02-28 01:01:49 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:01:49.618294 | orchestrator | 2026-02-28 01:01:49 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:52.661069 | orchestrator | 2026-02-28 01:01:52 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:01:52.661751 | orchestrator | 2026-02-28 01:01:52 | INFO  | Task aea08acf-24af-43d9-959c-d6ef4f5ec0b4 is in state STARTED 2026-02-28 01:01:52.662460 | orchestrator | 2026-02-28 01:01:52 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:01:52.662518 | orchestrator | 2026-02-28 01:01:52 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:55.710264 | orchestrator | 2026-02-28 01:01:55 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:01:55.712850 | orchestrator | 2026-02-28 01:01:55 | INFO  | Task aea08acf-24af-43d9-959c-d6ef4f5ec0b4 is in state STARTED 2026-02-28 01:01:55.713693 | orchestrator | 2026-02-28 01:01:55 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:01:55.713764 | orchestrator | 2026-02-28 01:01:55 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:58.760960 | orchestrator | 
2026-02-28 01:01:58 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:01:58.761181 | orchestrator | 2026-02-28 01:01:58 | INFO  | Task aea08acf-24af-43d9-959c-d6ef4f5ec0b4 is in state STARTED 2026-02-28 01:01:58.762214 | orchestrator | 2026-02-28 01:01:58 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:01:58.762274 | orchestrator | 2026-02-28 01:01:58 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:01.818309 | orchestrator | 2026-02-28 01:02:01 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:02:01.820968 | orchestrator | 2026-02-28 01:02:01 | INFO  | Task aea08acf-24af-43d9-959c-d6ef4f5ec0b4 is in state STARTED 2026-02-28 01:02:01.823052 | orchestrator | 2026-02-28 01:02:01 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:02:01.823319 | orchestrator | 2026-02-28 01:02:01 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:04.857203 | orchestrator | 2026-02-28 01:02:04 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:02:04.858834 | orchestrator | 2026-02-28 01:02:04 | INFO  | Task aea08acf-24af-43d9-959c-d6ef4f5ec0b4 is in state STARTED 2026-02-28 01:02:04.862560 | orchestrator | 2026-02-28 01:02:04 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:02:04.863337 | orchestrator | 2026-02-28 01:02:04 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:07.917367 | orchestrator | 2026-02-28 01:02:07 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:02:07.919931 | orchestrator | 2026-02-28 01:02:07 | INFO  | Task aea08acf-24af-43d9-959c-d6ef4f5ec0b4 is in state STARTED 2026-02-28 01:02:07.921895 | orchestrator | 2026-02-28 01:02:07 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:02:07.921982 | orchestrator | 2026-02-28 01:02:07 | INFO  | 
Wait 1 second(s) until the next check 2026-02-28 01:02:10.970889 | orchestrator | 2026-02-28 01:02:10 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:02:10.971520 | orchestrator | 2026-02-28 01:02:10 | INFO  | Task aea08acf-24af-43d9-959c-d6ef4f5ec0b4 is in state SUCCESS 2026-02-28 01:02:10.974304 | orchestrator | 2026-02-28 01:02:10 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:02:10.976553 | orchestrator | 2026-02-28 01:02:10 | INFO  | Task 45b5cd2f-b896-47c6-82a0-58e2e55f0d39 is in state STARTED 2026-02-28 01:02:10.976624 | orchestrator | 2026-02-28 01:02:10 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:14.022338 | orchestrator | 2026-02-28 01:02:14 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:02:14.023231 | orchestrator | 2026-02-28 01:02:14 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:02:14.024173 | orchestrator | 2026-02-28 01:02:14 | INFO  | Task 45b5cd2f-b896-47c6-82a0-58e2e55f0d39 is in state STARTED 2026-02-28 01:02:14.024206 | orchestrator | 2026-02-28 01:02:14 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:17.078997 | orchestrator | 2026-02-28 01:02:17 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:02:17.079485 | orchestrator | 2026-02-28 01:02:17 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:02:17.080846 | orchestrator | 2026-02-28 01:02:17 | INFO  | Task 45b5cd2f-b896-47c6-82a0-58e2e55f0d39 is in state STARTED 2026-02-28 01:02:17.080898 | orchestrator | 2026-02-28 01:02:17 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:20.128924 | orchestrator | 2026-02-28 01:02:20 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:02:20.131423 | orchestrator | 2026-02-28 01:02:20 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state 
STARTED 2026-02-28 01:02:20.133853 | orchestrator | 2026-02-28 01:02:20 | INFO  | Task 45b5cd2f-b896-47c6-82a0-58e2e55f0d39 is in state STARTED 2026-02-28 01:02:20.133896 | orchestrator | 2026-02-28 01:02:20 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:23.187792 | orchestrator | 2026-02-28 01:02:23 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:02:23.189192 | orchestrator | 2026-02-28 01:02:23 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:02:23.192302 | orchestrator | 2026-02-28 01:02:23 | INFO  | Task 45b5cd2f-b896-47c6-82a0-58e2e55f0d39 is in state STARTED 2026-02-28 01:02:23.192353 | orchestrator | 2026-02-28 01:02:23 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:26.233550 | orchestrator | 2026-02-28 01:02:26 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:02:26.234791 | orchestrator | 2026-02-28 01:02:26 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:02:26.236715 | orchestrator | 2026-02-28 01:02:26 | INFO  | Task 45b5cd2f-b896-47c6-82a0-58e2e55f0d39 is in state STARTED 2026-02-28 01:02:26.236768 | orchestrator | 2026-02-28 01:02:26 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:29.283952 | orchestrator | 2026-02-28 01:02:29 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:02:29.286146 | orchestrator | 2026-02-28 01:02:29 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:02:29.287587 | orchestrator | 2026-02-28 01:02:29 | INFO  | Task 45b5cd2f-b896-47c6-82a0-58e2e55f0d39 is in state STARTED 2026-02-28 01:02:29.287736 | orchestrator | 2026-02-28 01:02:29 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:32.336256 | orchestrator | 2026-02-28 01:02:32 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:02:32.338467 | orchestrator | 
2026-02-28 01:02:32 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:02:32.341732 | orchestrator | 2026-02-28 01:02:32 | INFO  | Task 45b5cd2f-b896-47c6-82a0-58e2e55f0d39 is in state STARTED 2026-02-28 01:02:32.341828 | orchestrator | 2026-02-28 01:02:32 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:35.383998 | orchestrator | 2026-02-28 01:02:35 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:02:35.385881 | orchestrator | 2026-02-28 01:02:35 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:02:35.387756 | orchestrator | 2026-02-28 01:02:35 | INFO  | Task 45b5cd2f-b896-47c6-82a0-58e2e55f0d39 is in state STARTED 2026-02-28 01:02:35.387821 | orchestrator | 2026-02-28 01:02:35 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:38.426201 | orchestrator | 2026-02-28 01:02:38 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:02:38.427593 | orchestrator | 2026-02-28 01:02:38 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:02:38.428832 | orchestrator | 2026-02-28 01:02:38 | INFO  | Task 45b5cd2f-b896-47c6-82a0-58e2e55f0d39 is in state STARTED 2026-02-28 01:02:38.428876 | orchestrator | 2026-02-28 01:02:38 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:41.489545 | orchestrator | 2026-02-28 01:02:41 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:02:41.489623 | orchestrator | 2026-02-28 01:02:41 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:02:41.491115 | orchestrator | 2026-02-28 01:02:41 | INFO  | Task 45b5cd2f-b896-47c6-82a0-58e2e55f0d39 is in state STARTED 2026-02-28 01:02:41.491150 | orchestrator | 2026-02-28 01:02:41 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:44.540846 | orchestrator | 2026-02-28 01:02:44 | INFO  | Task 
ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:02:44.542821 | orchestrator | 2026-02-28 01:02:44 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:02:44.545411 | orchestrator | 2026-02-28 01:02:44 | INFO  | Task 45b5cd2f-b896-47c6-82a0-58e2e55f0d39 is in state STARTED 2026-02-28 01:02:44.545464 | orchestrator | 2026-02-28 01:02:44 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:47.597515 | orchestrator | 2026-02-28 01:02:47 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:02:47.600381 | orchestrator | 2026-02-28 01:02:47 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:02:47.602948 | orchestrator | 2026-02-28 01:02:47 | INFO  | Task 45b5cd2f-b896-47c6-82a0-58e2e55f0d39 is in state STARTED 2026-02-28 01:02:47.602987 | orchestrator | 2026-02-28 01:02:47 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:50.644524 | orchestrator | 2026-02-28 01:02:50 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:02:50.645874 | orchestrator | 2026-02-28 01:02:50 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:02:50.647063 | orchestrator | 2026-02-28 01:02:50 | INFO  | Task 45b5cd2f-b896-47c6-82a0-58e2e55f0d39 is in state STARTED 2026-02-28 01:02:50.647128 | orchestrator | 2026-02-28 01:02:50 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:53.683966 | orchestrator | 2026-02-28 01:02:53 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:02:53.687264 | orchestrator | 2026-02-28 01:02:53 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:02:53.689659 | orchestrator | 2026-02-28 01:02:53 | INFO  | Task 45b5cd2f-b896-47c6-82a0-58e2e55f0d39 is in state STARTED 2026-02-28 01:02:53.689826 | orchestrator | 2026-02-28 01:02:53 | INFO  | Wait 1 second(s) until the next 
check 2026-02-28 01:02:56.732407 | orchestrator | 2026-02-28 01:02:56 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:02:56.733871 | orchestrator | 2026-02-28 01:02:56 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:02:56.736242 | orchestrator | 2026-02-28 01:02:56 | INFO  | Task 45b5cd2f-b896-47c6-82a0-58e2e55f0d39 is in state STARTED 2026-02-28 01:02:56.736527 | orchestrator | 2026-02-28 01:02:56 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:59.780570 | orchestrator | 2026-02-28 01:02:59 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:02:59.781509 | orchestrator | 2026-02-28 01:02:59 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:02:59.782397 | orchestrator | 2026-02-28 01:02:59 | INFO  | Task 45b5cd2f-b896-47c6-82a0-58e2e55f0d39 is in state STARTED 2026-02-28 01:02:59.782438 | orchestrator | 2026-02-28 01:02:59 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:02.825854 | orchestrator | 2026-02-28 01:03:02 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:03:02.825928 | orchestrator | 2026-02-28 01:03:02 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:03:02.825934 | orchestrator | 2026-02-28 01:03:02 | INFO  | Task 45b5cd2f-b896-47c6-82a0-58e2e55f0d39 is in state STARTED 2026-02-28 01:03:02.825939 | orchestrator | 2026-02-28 01:03:02 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:05.863274 | orchestrator | 2026-02-28 01:03:05 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:03:05.865204 | orchestrator | 2026-02-28 01:03:05 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:03:05.866383 | orchestrator | 2026-02-28 01:03:05 | INFO  | Task 45b5cd2f-b896-47c6-82a0-58e2e55f0d39 is in state STARTED 2026-02-28 
01:03:05.866416 | orchestrator | 2026-02-28 01:03:05 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:08.923936 | orchestrator | 2026-02-28 01:03:08 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:03:08.925463 | orchestrator | 2026-02-28 01:03:08 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:03:08.926888 | orchestrator | 2026-02-28 01:03:08 | INFO  | Task 45b5cd2f-b896-47c6-82a0-58e2e55f0d39 is in state STARTED 2026-02-28 01:03:08.926934 | orchestrator | 2026-02-28 01:03:08 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:11.972442 | orchestrator | 2026-02-28 01:03:11 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state STARTED 2026-02-28 01:03:11.972528 | orchestrator | 2026-02-28 01:03:11 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:03:11.976170 | orchestrator | 2026-02-28 01:03:11.976260 | orchestrator | 2026-02-28 01:03:11.976285 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-02-28 01:03:11.976307 | orchestrator | 2026-02-28 01:03:11.976326 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-02-28 01:03:11.976345 | orchestrator | Saturday 28 February 2026 01:01:32 +0000 (0:00:00.168) 0:00:00.168 ***** 2026-02-28 01:03:11.976365 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-28 01:03:11.976387 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-28 01:03:11.976405 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-28 01:03:11.976424 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-28 01:03:11.976442 | orchestrator | ok: 
[testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-28 01:03:11.976461 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-28 01:03:11.976480 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-28 01:03:11.976499 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-02-28 01:03:11.976518 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-28 01:03:11.976537 | orchestrator | 2026-02-28 01:03:11.976587 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-02-28 01:03:11.976608 | orchestrator | Saturday 28 February 2026 01:01:37 +0000 (0:00:04.468) 0:00:04.636 ***** 2026-02-28 01:03:11.976627 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-28 01:03:11.976673 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-28 01:03:11.976693 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-28 01:03:11.976734 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-28 01:03:11.976756 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-28 01:03:11.976775 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-28 01:03:11.976794 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-28 01:03:11.976812 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.gnocchi.keyring) 2026-02-28 01:03:11.976831 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-28 01:03:11.976851 | orchestrator | 2026-02-28 01:03:11.976871 | orchestrator | TASK [Create share directory] ************************************************** 2026-02-28 01:03:11.976889 | orchestrator | Saturday 28 February 2026 01:01:41 +0000 (0:00:04.539) 0:00:09.176 ***** 2026-02-28 01:03:11.976909 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-28 01:03:11.976959 | orchestrator | 2026-02-28 01:03:11.976980 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-02-28 01:03:11.976998 | orchestrator | Saturday 28 February 2026 01:01:43 +0000 (0:00:01.115) 0:00:10.292 ***** 2026-02-28 01:03:11.977017 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-02-28 01:03:11.977036 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-28 01:03:11.977055 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-28 01:03:11.977074 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-02-28 01:03:11.977093 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-28 01:03:11.977111 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-02-28 01:03:11.977129 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-02-28 01:03:11.977148 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-02-28 01:03:11.977167 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-02-28 01:03:11.977187 | orchestrator | 2026-02-28 01:03:11.977207 | orchestrator | 
TASK [Check if target directories exist] *************************************** 2026-02-28 01:03:11.977225 | orchestrator | Saturday 28 February 2026 01:01:58 +0000 (0:00:15.138) 0:00:25.430 ***** 2026-02-28 01:03:11.977243 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-02-28 01:03:11.977262 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-02-28 01:03:11.977282 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-28 01:03:11.977300 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-28 01:03:11.977340 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-28 01:03:11.977359 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-28 01:03:11.977378 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-02-28 01:03:11.977397 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-02-28 01:03:11.977416 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-02-28 01:03:11.977436 | orchestrator | 2026-02-28 01:03:11.977455 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-02-28 01:03:11.977473 | orchestrator | Saturday 28 February 2026 01:02:01 +0000 (0:00:03.192) 0:00:28.622 ***** 2026-02-28 01:03:11.977493 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-02-28 01:03:11.977511 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-28 01:03:11.977531 | 
orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-28 01:03:11.977551 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-02-28 01:03:11.977570 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-28 01:03:11.977588 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-02-28 01:03:11.977607 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-02-28 01:03:11.977626 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-02-28 01:03:11.977730 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-02-28 01:03:11.977749 | orchestrator | 2026-02-28 01:03:11.977765 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:03:11.977797 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:03:11.977818 | orchestrator | 2026-02-28 01:03:11.977836 | orchestrator | 2026-02-28 01:03:11.977855 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:03:11.977875 | orchestrator | Saturday 28 February 2026 01:02:08 +0000 (0:00:07.103) 0:00:35.726 ***** 2026-02-28 01:03:11.977886 | orchestrator | =============================================================================== 2026-02-28 01:03:11.977898 | orchestrator | Write ceph keys to the share directory --------------------------------- 15.14s 2026-02-28 01:03:11.977909 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.10s 2026-02-28 01:03:11.977920 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.54s 2026-02-28 01:03:11.977931 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.47s 2026-02-28 
01:03:11.977942 | orchestrator | Check if target directories exist --------------------------------------- 3.19s 2026-02-28 01:03:11.977953 | orchestrator | Create share directory -------------------------------------------------- 1.12s 2026-02-28 01:03:11.977964 | orchestrator | 2026-02-28 01:03:11.977974 | orchestrator | 2026-02-28 01:03:11.977985 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-02-28 01:03:11.977996 | orchestrator | 2026-02-28 01:03:11.978007 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-02-28 01:03:11.978075 | orchestrator | Saturday 28 February 2026 01:02:13 +0000 (0:00:00.233) 0:00:00.233 ***** 2026-02-28 01:03:11.978090 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-02-28 01:03:11.978102 | orchestrator | 2026-02-28 01:03:11.978113 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-02-28 01:03:11.978124 | orchestrator | Saturday 28 February 2026 01:02:13 +0000 (0:00:00.232) 0:00:00.466 ***** 2026-02-28 01:03:11.978135 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-02-28 01:03:11.978146 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-02-28 01:03:11.978157 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-02-28 01:03:11.978169 | orchestrator | 2026-02-28 01:03:11.978179 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-02-28 01:03:11.978190 | orchestrator | Saturday 28 February 2026 01:02:14 +0000 (0:00:01.360) 0:00:01.826 ***** 2026-02-28 01:03:11.978201 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-02-28 01:03:11.978213 | orchestrator | 
2026-02-28 01:03:11.978223 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-02-28 01:03:11.978234 | orchestrator | Saturday 28 February 2026 01:02:16 +0000 (0:00:01.774) 0:00:03.601 ***** 2026-02-28 01:03:11.978245 | orchestrator | changed: [testbed-manager] 2026-02-28 01:03:11.978256 | orchestrator | 2026-02-28 01:03:11.978267 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-02-28 01:03:11.978278 | orchestrator | Saturday 28 February 2026 01:02:17 +0000 (0:00:01.050) 0:00:04.651 ***** 2026-02-28 01:03:11.978289 | orchestrator | changed: [testbed-manager] 2026-02-28 01:03:11.978300 | orchestrator | 2026-02-28 01:03:11.978311 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-02-28 01:03:11.978322 | orchestrator | Saturday 28 February 2026 01:02:18 +0000 (0:00:01.022) 0:00:05.673 ***** 2026-02-28 01:03:11.978333 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2026-02-28 01:03:11.978344 | orchestrator | ok: [testbed-manager]
2026-02-28 01:03:11.978356 | orchestrator |
2026-02-28 01:03:11.978367 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-02-28 01:03:11.978389 | orchestrator | Saturday 28 February 2026 01:03:01 +0000 (0:00:42.221) 0:00:47.895 *****
2026-02-28 01:03:11.978409 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-02-28 01:03:11.978420 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-02-28 01:03:11.978431 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-02-28 01:03:11.978442 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-02-28 01:03:11.978453 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-02-28 01:03:11.978464 | orchestrator |
2026-02-28 01:03:11.978475 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-02-28 01:03:11.978486 | orchestrator | Saturday 28 February 2026 01:03:05 +0000 (0:00:04.207) 0:00:52.102 *****
2026-02-28 01:03:11.978497 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-02-28 01:03:11.978508 | orchestrator |
2026-02-28 01:03:11.978519 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-02-28 01:03:11.978530 | orchestrator | Saturday 28 February 2026 01:03:05 +0000 (0:00:00.136) 0:00:52.613 *****
2026-02-28 01:03:11.978541 | orchestrator | skipping: [testbed-manager]
2026-02-28 01:03:11.978552 | orchestrator |
2026-02-28 01:03:11.978563 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-02-28 01:03:11.978573 | orchestrator | Saturday 28 February 2026 01:03:05 +0000 (0:00:00.136) 0:00:52.750 *****
2026-02-28 01:03:11.978584 | orchestrator | skipping: [testbed-manager]
2026-02-28 01:03:11.978595 | orchestrator |
2026-02-28 01:03:11.978606 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-02-28 01:03:11.978617 | orchestrator | Saturday 28 February 2026 01:03:06 +0000 (0:00:00.562) 0:00:53.313 *****
2026-02-28 01:03:11.978628 | orchestrator | changed: [testbed-manager]
2026-02-28 01:03:11.978675 | orchestrator |
2026-02-28 01:03:11.978687 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-02-28 01:03:11.978697 | orchestrator | Saturday 28 February 2026 01:03:08 +0000 (0:00:01.623) 0:00:54.936 *****
2026-02-28 01:03:11.978708 | orchestrator | changed: [testbed-manager]
2026-02-28 01:03:11.978719 | orchestrator |
2026-02-28 01:03:11.978730 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-02-28 01:03:11.978741 | orchestrator | Saturday 28 February 2026 01:03:08 +0000 (0:00:00.771) 0:00:55.708 *****
2026-02-28 01:03:11.978752 | orchestrator | changed: [testbed-manager]
2026-02-28 01:03:11.978763 | orchestrator |
2026-02-28 01:03:11.978774 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-02-28 01:03:11.978791 | orchestrator | Saturday 28 February 2026 01:03:09 +0000 (0:00:00.635) 0:00:56.343 *****
2026-02-28 01:03:11.978802 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-02-28 01:03:11.978813 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-02-28 01:03:11.978825 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-02-28 01:03:11.978836 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-02-28 01:03:11.978847 | orchestrator |
2026-02-28 01:03:11.978858 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 01:03:11.978869 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 01:03:11.978881 | orchestrator |
2026-02-28 01:03:11.978891 | orchestrator |
2026-02-28 01:03:11.978902 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 01:03:11.978914 | orchestrator | Saturday 28 February 2026 01:03:11 +0000 (0:00:01.675) 0:00:58.019 *****
2026-02-28 01:03:11.978925 | orchestrator | ===============================================================================
2026-02-28 01:03:11.978936 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.22s
2026-02-28 01:03:11.978947 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.21s
2026-02-28 01:03:11.978958 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.77s
2026-02-28 01:03:11.978968 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.68s
2026-02-28 01:03:11.978986 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.62s
2026-02-28 01:03:11.978998 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.36s
2026-02-28 01:03:11.979009 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.05s
2026-02-28 01:03:11.979019 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.02s
2026-02-28 01:03:11.979031 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.77s
2026-02-28 01:03:11.979042 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.64s
2026-02-28 01:03:11.979053 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.56s
2026-02-28 01:03:11.979064 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.51s
2026-02-28 01:03:11.979074 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s
2026-02-28 01:03:11.979085 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s
2026-02-28 01:03:11.979096 | orchestrator | 2026-02-28 01:03:11 | INFO  | Task 45b5cd2f-b896-47c6-82a0-58e2e55f0d39 is in state SUCCESS
2026-02-28 01:03:11.979108 | orchestrator | 2026-02-28 01:03:11 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:03:15.027027 | orchestrator |
2026-02-28 01:03:15.027324 | orchestrator |
2026-02-28 01:03:15.027345 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 01:03:15.027357 | orchestrator |
2026-02-28 01:03:15.027369 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 01:03:15.027380 | orchestrator | Saturday 28 February 2026 01:01:17 +0000 (0:00:00.266) 0:00:00.266 *****
2026-02-28 01:03:15.027392 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:03:15.027404 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:03:15.027415 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:03:15.027424 | orchestrator |
2026-02-28 01:03:15.027431 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 01:03:15.027437 | orchestrator | Saturday 28 February 2026 01:01:17 +0000 (0:00:00.305) 0:00:00.572 *****
2026-02-28 01:03:15.027444 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-02-28 01:03:15.027451 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-02-28 01:03:15.027459 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-02-28 01:03:15.027465 | orchestrator |
2026-02-28 01:03:15.027472 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-02-28 01:03:15.027478 | orchestrator |
2026-02-28 01:03:15.027485 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-28 01:03:15.027491 | orchestrator | Saturday 28 February 2026 01:01:18 +0000 (0:00:00.475) 0:00:01.047 *****
2026-02-28 01:03:15.027498 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 01:03:15.027506 | orchestrator |
2026-02-28 01:03:15.027512 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-02-28 01:03:15.027518 | orchestrator | Saturday 28 February 2026 01:01:18 +0000 (0:00:00.556) 0:00:01.604 *****
2026-02-28 01:03:15.027544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-28 01:03:15.027609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-28 01:03:15.027623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-28 01:03:15.027662 | orchestrator |
2026-02-28 01:03:15.027670 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-02-28 01:03:15.027677 | orchestrator | Saturday 28 February 2026 01:01:19 +0000 (0:00:01.090) 0:00:02.694 *****
2026-02-28 01:03:15.027683 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:03:15.027689 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:03:15.027696 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:03:15.027702 | orchestrator |
2026-02-28 01:03:15.027709 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-28 01:03:15.027727 | orchestrator | Saturday 28 February 2026 01:01:20 +0000 (0:00:00.547) 0:00:03.242 *****
2026-02-28 01:03:15.027733 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-28 01:03:15.027740 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-28 01:03:15.027746 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-02-28 01:03:15.027753 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-02-28 01:03:15.027759 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-02-28 01:03:15.027765 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-02-28 01:03:15.027772 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-02-28 01:03:15.027778 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-02-28 01:03:15.027784 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-28 01:03:15.027791 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-28 01:03:15.027797 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-02-28 01:03:15.027803 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-02-28 01:03:15.027809 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-02-28 01:03:15.027816 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-02-28 01:03:15.027827 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-02-28 01:03:15.027833 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-02-28 01:03:15.027840 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-28 01:03:15.027846 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-28 01:03:15.027852 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-02-28 01:03:15.027859 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-02-28 01:03:15.027865 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-02-28 01:03:15.027871 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-02-28 01:03:15.027882 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-02-28 01:03:15.027888 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-02-28 01:03:15.027896 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-02-28 01:03:15.027904 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-02-28 01:03:15.027911 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-02-28 01:03:15.027917 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-02-28 01:03:15.027923 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-02-28 01:03:15.027930 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-02-28 01:03:15.027937 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-02-28 01:03:15.027943 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-02-28 01:03:15.027949 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-02-28 01:03:15.027957 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-02-28 01:03:15.027963 | orchestrator |
2026-02-28 01:03:15.027970 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-28 01:03:15.027976 | orchestrator | Saturday 28 February 2026 01:01:21 +0000 (0:00:00.826) 0:00:04.069 *****
2026-02-28 01:03:15.027982 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:03:15.027989 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:03:15.027995 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:03:15.028001 | orchestrator |
2026-02-28 01:03:15.028012 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-28 01:03:15.028019 | orchestrator | Saturday 28 February 2026 01:01:21 +0000 (0:00:00.141) 0:00:04.395 *****
2026-02-28 01:03:15.028025 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:03:15.028032 | orchestrator |
2026-02-28 01:03:15.028039 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-28 01:03:15.028045 | orchestrator | Saturday 28 February 2026 01:01:21 +0000 (0:00:00.141) 0:00:04.537 *****
2026-02-28 01:03:15.028057 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:03:15.028063 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:03:15.028069 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:03:15.028076 | orchestrator |
2026-02-28 01:03:15.028082 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-28 01:03:15.028088 | orchestrator | Saturday 28 February 2026 01:01:22 +0000 (0:00:00.513) 0:00:05.050 *****
2026-02-28 01:03:15.028094 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:03:15.028101 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:03:15.028107 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:03:15.028113 | orchestrator |
2026-02-28 01:03:15.028120 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-28 01:03:15.028126 | orchestrator | Saturday 28 February 2026 01:01:22 +0000 (0:00:00.314) 0:00:05.364 *****
2026-02-28 01:03:15.028132 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:03:15.028138 | orchestrator |
2026-02-28 01:03:15.028145 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-28 01:03:15.028151 | orchestrator | Saturday 28 February 2026 01:01:22 +0000 (0:00:00.141) 0:00:05.505 *****
2026-02-28 01:03:15.028157 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:03:15.028164 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:03:15.028170 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:03:15.028176 | orchestrator |
2026-02-28 01:03:15.028182 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-28 01:03:15.028189 | orchestrator | Saturday 28 February 2026 01:01:23 +0000 (0:00:00.291) 0:00:05.797 *****
2026-02-28 01:03:15.028195 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:03:15.028264 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:03:15.028273 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:03:15.028279 | orchestrator |
2026-02-28 01:03:15.028286 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-28 01:03:15.028294 | orchestrator | Saturday 28 February 2026 01:01:23 +0000 (0:00:00.336) 0:00:06.134 *****
2026-02-28 01:03:15.028306 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:03:15.028316 | orchestrator |
2026-02-28 01:03:15.028327 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-28 01:03:15.028338 | orchestrator | Saturday 28 February 2026 01:01:23 +0000 (0:00:00.340) 0:00:06.474 *****
2026-02-28 01:03:15.028349 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:03:15.028360 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:03:15.028370 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:03:15.028380 | orchestrator |
2026-02-28 01:03:15.028391 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-28 01:03:15.028402 | orchestrator | Saturday 28 February 2026 01:01:24 +0000 (0:00:00.326) 0:00:06.801 *****
2026-02-28 01:03:15.028408 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:03:15.028415 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:03:15.028421 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:03:15.028427 | orchestrator |
2026-02-28 01:03:15.028433 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-28 01:03:15.028440 | orchestrator | Saturday 28 February 2026 01:01:24 +0000 (0:00:00.352) 0:00:07.153 *****
2026-02-28 01:03:15.028446 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:03:15.028452 | orchestrator |
2026-02-28 01:03:15.028458 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-28 01:03:15.028465 | orchestrator | Saturday 28 February 2026 01:01:24 +0000 (0:00:00.130) 0:00:07.284 *****
2026-02-28 01:03:15.028471 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:03:15.028477 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:03:15.028483 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:03:15.028490 | orchestrator |
2026-02-28 01:03:15.028496 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-28 01:03:15.028502 | orchestrator | Saturday 28 February 2026 01:01:24 +0000 (0:00:00.291) 0:00:07.576 *****
2026-02-28 01:03:15.028508 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:03:15.028520 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:03:15.028527 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:03:15.028533 | orchestrator |
2026-02-28 01:03:15.028539 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-28 01:03:15.028545 | orchestrator | Saturday 28 February 2026 01:01:25 +0000 (0:00:00.535) 0:00:08.112 *****
2026-02-28 01:03:15.028552 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:03:15.028558 | orchestrator |
2026-02-28 01:03:15.028564 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-28 01:03:15.028570 | orchestrator | Saturday 28 February 2026 01:01:25 +0000 (0:00:00.146) 0:00:08.259 *****
2026-02-28 01:03:15.028581 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:03:15.028594 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:03:15.028608 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:03:15.028619 | orchestrator |
2026-02-28 01:03:15.028647 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-28 01:03:15.028659 | orchestrator | Saturday 28 February 2026 01:01:25 +0000 (0:00:00.323) 0:00:08.582 *****
2026-02-28 01:03:15.028668 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:03:15.028678 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:03:15.028688 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:03:15.028698 | orchestrator |
2026-02-28 01:03:15.028706 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-28 01:03:15.028715 | orchestrator | Saturday 28 February 2026 01:01:26 +0000 (0:00:00.327) 0:00:08.910 *****
2026-02-28 01:03:15.028724 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:03:15.028732 | orchestrator |
2026-02-28 01:03:15.028742 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-28 01:03:15.028751 | orchestrator | Saturday 28 February 2026 01:01:26 +0000 (0:00:00.134) 0:00:09.044 *****
2026-02-28 01:03:15.028760 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:03:15.028771 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:03:15.028790 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:03:15.028800 | orchestrator |
2026-02-28 01:03:15.028810 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-28 01:03:15.028821 | orchestrator | Saturday 28 February 2026 01:01:26 +0000 (0:00:00.335) 0:00:09.380 *****
2026-02-28 01:03:15.028831 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:03:15.028841 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:03:15.028852 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:03:15.028859 | orchestrator |
2026-02-28 01:03:15.028866 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-28 01:03:15.028872 | orchestrator | Saturday 28 February 2026 01:01:27 +0000 (0:00:00.584) 0:00:09.964 *****
2026-02-28 01:03:15.028878 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:03:15.028884 | orchestrator |
2026-02-28 01:03:15.028891 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-28 01:03:15.028897 | orchestrator | Saturday 28 February 2026 01:01:27 +0000 (0:00:00.150) 0:00:10.115 *****
2026-02-28 01:03:15.028903 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:03:15.028910 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:03:15.028916 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:03:15.028922 | orchestrator |
2026-02-28 01:03:15.028928 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-28 01:03:15.028935 | orchestrator | Saturday 28 February 2026 01:01:27 +0000 (0:00:00.381) 0:00:10.496 *****
2026-02-28 01:03:15.028941 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:03:15.028947 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:03:15.028953 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:03:15.028960 | orchestrator |
2026-02-28 01:03:15.028966 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-28 01:03:15.028972 | orchestrator | Saturday 28 February 2026 01:01:28 +0000 (0:00:00.370) 0:00:10.867 *****
2026-02-28 01:03:15.028978 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:03:15.028984 | orchestrator |
2026-02-28 01:03:15.028999 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-28 01:03:15.029006 | orchestrator | Saturday 28 February 2026 01:01:28 +0000 (0:00:00.143) 0:00:11.010 *****
2026-02-28 01:03:15.029012 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:03:15.029018 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:03:15.029024 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:03:15.029031 | orchestrator |
2026-02-28 01:03:15.029037 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-28 01:03:15.029043 | orchestrator | Saturday 28 February 2026 01:01:28 +0000 (0:00:00.693) 0:00:11.704 *****
2026-02-28 01:03:15.029049 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:03:15.029056 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:03:15.029063 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:03:15.029074 | orchestrator |
2026-02-28 01:03:15.029090 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-28 01:03:15.029101 | orchestrator | Saturday 28 February 2026 01:01:29 +0000 (0:00:00.456) 0:00:12.160 *****
2026-02-28 01:03:15.029112 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:03:15.029122 | orchestrator |
2026-02-28 01:03:15.029137 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-28 01:03:15.029148 | orchestrator | Saturday 28 February 2026 01:01:29 +0000 (0:00:00.188) 0:00:12.349 *****
2026-02-28 01:03:15.029158 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:03:15.029169 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:03:15.029180 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:03:15.029190 | orchestrator |
2026-02-28 01:03:15.029200 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-28 01:03:15.029211 | orchestrator | Saturday 28 February 2026 01:01:30 +0000 (0:00:00.449) 0:00:12.798 *****
2026-02-28 01:03:15.029218 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:03:15.029225 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:03:15.029231 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:03:15.029237 | orchestrator |
2026-02-28 01:03:15.029243 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-28 01:03:15.029250 | orchestrator | Saturday 28 February 2026 01:01:30 +0000 (0:00:00.372) 0:00:13.171 *****
2026-02-28 01:03:15.029256 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:03:15.029262 | orchestrator |
2026-02-28 01:03:15.029268 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-28 01:03:15.029275 | orchestrator | Saturday 28 February 2026 01:01:30 +0000 (0:00:00.149) 0:00:13.320 *****
2026-02-28 01:03:15.029281 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:03:15.029287 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:03:15.029297 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:03:15.029307 | orchestrator |
2026-02-28 01:03:15.029317 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-02-28 01:03:15.029327 | orchestrator | Saturday 28 February 2026 01:01:31 +0000 (0:00:00.589) 0:00:13.910 *****
2026-02-28 01:03:15.029339 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:03:15.029349 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:03:15.029360 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:03:15.029371 | orchestrator |
2026-02-28 01:03:15.029381 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-02-28 01:03:15.029391 | orchestrator | Saturday 28 February 2026 01:01:33 +0000 (0:00:01.952) 0:00:15.862 *****
2026-02-28 01:03:15.029403 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-28 01:03:15.029413 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-28 01:03:15.029424 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-28 01:03:15.029431 | orchestrator |
2026-02-28 01:03:15.029437 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-02-28 01:03:15.029443 | orchestrator | Saturday 28 February 2026 01:01:35 +0000 (0:00:01.970) 0:00:17.832 *****
2026-02-28 01:03:15.029456 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-28 01:03:15.029463 | orchestrator
| changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-28 01:03:15.029476 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-28 01:03:15.029483 | orchestrator | 2026-02-28 01:03:15.029489 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-02-28 01:03:15.029496 | orchestrator | Saturday 28 February 2026 01:01:37 +0000 (0:00:02.466) 0:00:20.299 ***** 2026-02-28 01:03:15.029502 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-28 01:03:15.029508 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-28 01:03:15.029515 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-28 01:03:15.029521 | orchestrator | 2026-02-28 01:03:15.029527 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-02-28 01:03:15.029534 | orchestrator | Saturday 28 February 2026 01:01:39 +0000 (0:00:02.358) 0:00:22.657 ***** 2026-02-28 01:03:15.029540 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:15.029546 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:03:15.029552 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:03:15.029559 | orchestrator | 2026-02-28 01:03:15.029565 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-02-28 01:03:15.029571 | orchestrator | Saturday 28 February 2026 01:01:40 +0000 (0:00:00.360) 0:00:23.018 ***** 2026-02-28 01:03:15.029577 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:15.029584 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:03:15.029590 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:03:15.029596 | orchestrator | 2026-02-28 
01:03:15.029602 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-28 01:03:15.029609 | orchestrator | Saturday 28 February 2026 01:01:40 +0000 (0:00:00.338) 0:00:23.357 ***** 2026-02-28 01:03:15.029615 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:03:15.029621 | orchestrator | 2026-02-28 01:03:15.029628 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-02-28 01:03:15.029679 | orchestrator | Saturday 28 February 2026 01:01:41 +0000 (0:00:00.809) 0:00:24.167 ***** 2026-02-28 01:03:15.029694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 01:03:15.029717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 01:03:15.029739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 01:03:15.029751 | orchestrator | 2026-02-28 01:03:15.029757 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-02-28 01:03:15.029764 | orchestrator | Saturday 28 February 2026 01:01:43 +0000 (0:00:01.704) 0:00:25.871 ***** 2026-02-28 01:03:15.029775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 
01:03:15.029782 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:15.029794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 2026-02-28 01:03:15 | INFO  | Task ed7b2359-c4a4-4150-882c-a92b5a1409ee is in state SUCCESS 2026-02-28 01:03:15.029805 | orchestrator | 
'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 01:03:15.029812 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:03:15.029822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 01:03:15.029833 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:03:15.029838 | orchestrator | 2026-02-28 01:03:15.029844 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-02-28 01:03:15.029849 | orchestrator | Saturday 28 February 2026 01:01:43 +0000 (0:00:00.659) 0:00:26.530 ***** 2026-02-28 01:03:15.029860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 01:03:15.029866 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:15.029876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 01:03:15.029887 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:03:15.029897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 01:03:15.029904 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:03:15.029909 | orchestrator | 2026-02-28 01:03:15.029915 | orchestrator | TASK [service-check-containers : horizon | Check containers] ******************* 2026-02-28 01:03:15.029921 | orchestrator | Saturday 28 February 2026 01:01:44 +0000 (0:00:01.169) 0:00:27.700 ***** 2026-02-28 01:03:15.029930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 01:03:15.029951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 
'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 01:03:15.029962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 01:03:15.029972 | orchestrator | 2026-02-28 01:03:15.029978 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] *** 2026-02-28 01:03:15.029984 | orchestrator | Saturday 28 February 2026 01:01:47 +0000 (0:00:02.148) 0:00:29.849 ***** 2026-02-28 01:03:15.029989 | orchestrator | changed: [testbed-node-0] => { 2026-02-28 01:03:15.029995 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:03:15.030001 | orchestrator | } 2026-02-28 01:03:15.030006 | orchestrator | changed: [testbed-node-1] => { 2026-02-28 01:03:15.030012 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:03:15.030060 | orchestrator | } 2026-02-28 01:03:15.030066 | orchestrator | changed: [testbed-node-2] => { 2026-02-28 01:03:15.030072 | orchestrator |  "msg": "Notifying 
handlers" 2026-02-28 01:03:15.030077 | orchestrator | } 2026-02-28 01:03:15.030083 | orchestrator | 2026-02-28 01:03:15.030088 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-28 01:03:15.030094 | orchestrator | Saturday 28 February 2026 01:01:47 +0000 (0:00:00.525) 0:00:30.375 ***** 2026-02-28 01:03:15.030103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 01:03:15.030114 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:15.030126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 01:03:15.030133 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:03:15.030142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 01:03:15.030155 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:03:15.030161 | orchestrator | 2026-02-28 01:03:15.030166 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-28 01:03:15.030172 | orchestrator | Saturday 28 February 2026 01:01:48 +0000 (0:00:01.106) 0:00:31.481 ***** 2026-02-28 01:03:15.030177 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:15.030183 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:03:15.030188 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:03:15.030197 | orchestrator | 2026-02-28 01:03:15.030206 | orchestrator | TASK [horizon : include_tasks] 
************************************************* 2026-02-28 01:03:15.030215 | orchestrator | Saturday 28 February 2026 01:01:49 +0000 (0:00:00.680) 0:00:32.162 ***** 2026-02-28 01:03:15.030229 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:03:15.030239 | orchestrator | 2026-02-28 01:03:15.030249 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-02-28 01:03:15.030259 | orchestrator | Saturday 28 February 2026 01:01:50 +0000 (0:00:00.674) 0:00:32.837 ***** 2026-02-28 01:03:15.030268 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:03:15.030273 | orchestrator | 2026-02-28 01:03:15.030279 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-02-28 01:03:15.030284 | orchestrator | Saturday 28 February 2026 01:01:52 +0000 (0:00:02.420) 0:00:35.257 ***** 2026-02-28 01:03:15.030290 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:03:15.030299 | orchestrator | 2026-02-28 01:03:15.030308 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-02-28 01:03:15.030317 | orchestrator | Saturday 28 February 2026 01:01:54 +0000 (0:00:02.372) 0:00:37.629 ***** 2026-02-28 01:03:15.030326 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:03:15.030335 | orchestrator | 2026-02-28 01:03:15.030344 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-28 01:03:15.030353 | orchestrator | Saturday 28 February 2026 01:02:12 +0000 (0:00:17.532) 0:00:55.162 ***** 2026-02-28 01:03:15.030363 | orchestrator | 2026-02-28 01:03:15.030369 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-28 01:03:15.030380 | orchestrator | Saturday 28 February 2026 01:02:12 +0000 (0:00:00.063) 0:00:55.225 ***** 2026-02-28 01:03:15.030385 | orchestrator | 
2026-02-28 01:03:15.030391 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-28 01:03:15.030397 | orchestrator | Saturday 28 February 2026 01:02:12 +0000 (0:00:00.262) 0:00:55.488 ***** 2026-02-28 01:03:15.030402 | orchestrator | 2026-02-28 01:03:15.030408 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-02-28 01:03:15.030413 | orchestrator | Saturday 28 February 2026 01:02:12 +0000 (0:00:00.067) 0:00:55.556 ***** 2026-02-28 01:03:15.030418 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:03:15.030424 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:03:15.030430 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:03:15.030435 | orchestrator | 2026-02-28 01:03:15.030440 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:03:15.030446 | orchestrator | testbed-node-0 : ok=38  changed=12  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0 2026-02-28 01:03:15.030453 | orchestrator | testbed-node-1 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-02-28 01:03:15.030459 | orchestrator | testbed-node-2 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-02-28 01:03:15.030464 | orchestrator | 2026-02-28 01:03:15.030470 | orchestrator | 2026-02-28 01:03:15.030479 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:03:15.030485 | orchestrator | Saturday 28 February 2026 01:03:12 +0000 (0:00:59.276) 0:01:54.832 ***** 2026-02-28 01:03:15.030490 | orchestrator | =============================================================================== 2026-02-28 01:03:15.030496 | orchestrator | horizon : Restart horizon container ------------------------------------ 59.28s 2026-02-28 01:03:15.030501 | orchestrator | horizon : Running Horizon bootstrap container 
-------------------------- 17.53s 2026-02-28 01:03:15.030507 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.47s 2026-02-28 01:03:15.030512 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.42s 2026-02-28 01:03:15.030517 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.37s 2026-02-28 01:03:15.030523 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.36s 2026-02-28 01:03:15.030528 | orchestrator | service-check-containers : horizon | Check containers ------------------- 2.15s 2026-02-28 01:03:15.030534 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.97s 2026-02-28 01:03:15.030539 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.95s 2026-02-28 01:03:15.030545 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.70s 2026-02-28 01:03:15.030550 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.17s 2026-02-28 01:03:15.030556 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.11s 2026-02-28 01:03:15.030561 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.09s 2026-02-28 01:03:15.030567 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.83s 2026-02-28 01:03:15.030572 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.81s 2026-02-28 01:03:15.030690 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.69s 2026-02-28 01:03:15.030697 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.68s 2026-02-28 01:03:15.030702 | orchestrator | horizon : include_tasks 
------------------------------------------------- 0.67s 2026-02-28 01:03:15.030708 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.66s 2026-02-28 01:03:15.030719 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.59s 2026-02-28 01:03:15.030724 | orchestrator | 2026-02-28 01:03:15 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:03:15.030733 | orchestrator | 2026-02-28 01:03:15 | INFO  | Task b994309c-c798-418b-a4f7-87a36b92ec01 is in state STARTED 2026-02-28 01:03:15.031988 | orchestrator | 2026-02-28 01:03:15 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:03:15.033070 | orchestrator | 2026-02-28 01:03:15 | INFO  | Task 5892ea56-b2e8-4d51-aeea-39c6926a6ba1 is in state STARTED 2026-02-28 01:03:15.033446 | orchestrator | 2026-02-28 01:03:15 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:18.066602 | orchestrator | 2026-02-28 01:03:18 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:03:18.066812 | orchestrator | 2026-02-28 01:03:18 | INFO  | Task b994309c-c798-418b-a4f7-87a36b92ec01 is in state STARTED 2026-02-28 01:03:18.068232 | orchestrator | 2026-02-28 01:03:18 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:03:18.071067 | orchestrator | 2026-02-28 01:03:18 | INFO  | Task 5892ea56-b2e8-4d51-aeea-39c6926a6ba1 is in state STARTED 2026-02-28 01:03:18.071107 | orchestrator | 2026-02-28 01:03:18 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:21.102332 | orchestrator | 2026-02-28 01:03:21 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:03:21.106259 | orchestrator | 2026-02-28 01:03:21 | INFO  | Task b994309c-c798-418b-a4f7-87a36b92ec01 is in state STARTED 2026-02-28 01:03:21.106342 | orchestrator | 2026-02-28 01:03:21 | INFO  | Task 
ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:03:21.106352 | orchestrator | 2026-02-28 01:03:21 | INFO  | Task 5892ea56-b2e8-4d51-aeea-39c6926a6ba1 is in state STARTED 2026-02-28 01:03:21.106360 | orchestrator | 2026-02-28 01:03:21 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:24.162543 | orchestrator | 2026-02-28 01:03:24 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:03:24.162683 | orchestrator | 2026-02-28 01:03:24 | INFO  | Task b994309c-c798-418b-a4f7-87a36b92ec01 is in state STARTED 2026-02-28 01:03:24.162697 | orchestrator | 2026-02-28 01:03:24 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:03:24.162704 | orchestrator | 2026-02-28 01:03:24 | INFO  | Task 5892ea56-b2e8-4d51-aeea-39c6926a6ba1 is in state STARTED 2026-02-28 01:03:24.162712 | orchestrator | 2026-02-28 01:03:24 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:27.257726 | orchestrator | 2026-02-28 01:03:27 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:03:27.257839 | orchestrator | 2026-02-28 01:03:27 | INFO  | Task b994309c-c798-418b-a4f7-87a36b92ec01 is in state STARTED 2026-02-28 01:03:27.257892 | orchestrator | 2026-02-28 01:03:27 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:03:27.257913 | orchestrator | 2026-02-28 01:03:27 | INFO  | Task 5892ea56-b2e8-4d51-aeea-39c6926a6ba1 is in state STARTED 2026-02-28 01:03:27.257931 | orchestrator | 2026-02-28 01:03:27 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:30.240710 | orchestrator | 2026-02-28 01:03:30 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:03:30.241596 | orchestrator | 2026-02-28 01:03:30 | INFO  | Task b994309c-c798-418b-a4f7-87a36b92ec01 is in state STARTED 2026-02-28 01:03:30.241846 | orchestrator | 2026-02-28 01:03:30 | INFO  | Task 
ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:03:30.242716 | orchestrator | 2026-02-28 01:03:30 | INFO  | Task 9c9c3702-3f62-49d3-a5e2-56d961667a99 is in state STARTED 2026-02-28 01:03:30.243628 | orchestrator | 2026-02-28 01:03:30 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED 2026-02-28 01:03:30.244899 | orchestrator | 2026-02-28 01:03:30 | INFO  | Task 5892ea56-b2e8-4d51-aeea-39c6926a6ba1 is in state SUCCESS 2026-02-28 01:03:30.244944 | orchestrator | 2026-02-28 01:03:30 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:33.278447 | orchestrator | 2026-02-28 01:03:33 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:03:33.279126 | orchestrator | 2026-02-28 01:03:33 | INFO  | Task b994309c-c798-418b-a4f7-87a36b92ec01 is in state STARTED 2026-02-28 01:03:33.280065 | orchestrator | 2026-02-28 01:03:33 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:03:33.280920 | orchestrator | 2026-02-28 01:03:33 | INFO  | Task 9c9c3702-3f62-49d3-a5e2-56d961667a99 is in state STARTED 2026-02-28 01:03:33.282753 | orchestrator | 2026-02-28 01:03:33 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED 2026-02-28 01:03:33.282776 | orchestrator | 2026-02-28 01:03:33 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:36.320857 | orchestrator | 2026-02-28 01:03:36 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:03:36.321171 | orchestrator | 2026-02-28 01:03:36 | INFO  | Task b994309c-c798-418b-a4f7-87a36b92ec01 is in state STARTED 2026-02-28 01:03:36.322144 | orchestrator | 2026-02-28 01:03:36 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:03:36.322720 | orchestrator | 2026-02-28 01:03:36 | INFO  | Task 9c9c3702-3f62-49d3-a5e2-56d961667a99 is in state STARTED 2026-02-28 01:03:36.323752 | orchestrator | 2026-02-28 01:03:36 | INFO  | Task 
7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED 2026-02-28 01:03:36.323796 | orchestrator | 2026-02-28 01:03:36 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:39.345804 | orchestrator | 2026-02-28 01:03:39 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:03:39.346179 | orchestrator | 2026-02-28 01:03:39 | INFO  | Task b994309c-c798-418b-a4f7-87a36b92ec01 is in state STARTED 2026-02-28 01:03:39.348569 | orchestrator | 2026-02-28 01:03:39 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:03:39.349167 | orchestrator | 2026-02-28 01:03:39 | INFO  | Task 9c9c3702-3f62-49d3-a5e2-56d961667a99 is in state STARTED 2026-02-28 01:03:39.349954 | orchestrator | 2026-02-28 01:03:39 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED 2026-02-28 01:03:39.349991 | orchestrator | 2026-02-28 01:03:39 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:42.448588 | orchestrator | 2026-02-28 01:03:42 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:03:42.448784 | orchestrator | 2026-02-28 01:03:42 | INFO  | Task b994309c-c798-418b-a4f7-87a36b92ec01 is in state STARTED 2026-02-28 01:03:42.448798 | orchestrator | 2026-02-28 01:03:42 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:03:42.448803 | orchestrator | 2026-02-28 01:03:42 | INFO  | Task 9c9c3702-3f62-49d3-a5e2-56d961667a99 is in state STARTED 2026-02-28 01:03:42.448812 | orchestrator | 2026-02-28 01:03:42 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED 2026-02-28 01:03:42.448818 | orchestrator | 2026-02-28 01:03:42 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:45.467282 | orchestrator | 2026-02-28 01:03:45 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:03:45.468869 | orchestrator | 2026-02-28 01:03:45 | INFO  | Task 
b994309c-c798-418b-a4f7-87a36b92ec01 is in state STARTED 2026-02-28 01:03:45.471178 | orchestrator | 2026-02-28 01:03:45 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:03:45.473346 | orchestrator | 2026-02-28 01:03:45 | INFO  | Task 9c9c3702-3f62-49d3-a5e2-56d961667a99 is in state STARTED 2026-02-28 01:03:45.474711 | orchestrator | 2026-02-28 01:03:45 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED 2026-02-28 01:03:45.475101 | orchestrator | 2026-02-28 01:03:45 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:48.521610 | orchestrator | 2026-02-28 01:03:48 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:03:48.525475 | orchestrator | 2026-02-28 01:03:48 | INFO  | Task b994309c-c798-418b-a4f7-87a36b92ec01 is in state STARTED 2026-02-28 01:03:48.526654 | orchestrator | 2026-02-28 01:03:48 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:03:48.528019 | orchestrator | 2026-02-28 01:03:48 | INFO  | Task 9c9c3702-3f62-49d3-a5e2-56d961667a99 is in state STARTED 2026-02-28 01:03:48.530858 | orchestrator | 2026-02-28 01:03:48 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED 2026-02-28 01:03:48.530922 | orchestrator | 2026-02-28 01:03:48 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:51.561704 | orchestrator | 2026-02-28 01:03:51 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:03:51.562676 | orchestrator | 2026-02-28 01:03:51 | INFO  | Task b994309c-c798-418b-a4f7-87a36b92ec01 is in state STARTED 2026-02-28 01:03:51.564003 | orchestrator | 2026-02-28 01:03:51 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state STARTED 2026-02-28 01:03:51.565529 | orchestrator | 2026-02-28 01:03:51 | INFO  | Task 9c9c3702-3f62-49d3-a5e2-56d961667a99 is in state STARTED 2026-02-28 01:03:51.566627 | orchestrator | 2026-02-28 01:03:51 | INFO  | Task 
7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED 2026-02-28 01:03:51.566690 | orchestrator | 2026-02-28 01:03:51 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:54.628337 | orchestrator | 2026-02-28 01:03:54 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:03:54.630965 | orchestrator | 2026-02-28 01:03:54 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED 2026-02-28 01:03:54.633284 | orchestrator | 2026-02-28 01:03:54 | INFO  | Task b994309c-c798-418b-a4f7-87a36b92ec01 is in state STARTED 2026-02-28 01:03:54.637849 | orchestrator | 2026-02-28 01:03:54 | INFO  | Task ade46703-9859-4d9b-9981-77970d56ece4 is in state SUCCESS 2026-02-28 01:03:54.639968 | orchestrator | 2026-02-28 01:03:54.640050 | orchestrator | 2026-02-28 01:03:54.640077 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:03:54.640085 | orchestrator | 2026-02-28 01:03:54.640092 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:03:54.640100 | orchestrator | Saturday 28 February 2026 01:03:17 +0000 (0:00:00.218) 0:00:00.218 ***** 2026-02-28 01:03:54.640107 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:03:54.640115 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:03:54.640122 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:03:54.640128 | orchestrator | 2026-02-28 01:03:54.640135 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:03:54.640142 | orchestrator | Saturday 28 February 2026 01:03:17 +0000 (0:00:00.354) 0:00:00.573 ***** 2026-02-28 01:03:54.640149 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-02-28 01:03:54.640176 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-02-28 01:03:54.640183 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-02-28 
01:03:54.640190 | orchestrator | 2026-02-28 01:03:54.640196 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-02-28 01:03:54.640203 | orchestrator | 2026-02-28 01:03:54.640209 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-02-28 01:03:54.640216 | orchestrator | Saturday 28 February 2026 01:03:18 +0000 (0:00:00.913) 0:00:01.486 ***** 2026-02-28 01:03:54.640223 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:03:54.640229 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:03:54.640235 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:03:54.640242 | orchestrator | 2026-02-28 01:03:54.640248 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:03:54.640256 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:03:54.640265 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:03:54.640283 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:03:54.640289 | orchestrator | 2026-02-28 01:03:54.640296 | orchestrator | 2026-02-28 01:03:54.640303 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:03:54.640309 | orchestrator | Saturday 28 February 2026 01:03:27 +0000 (0:00:08.825) 0:00:10.312 ***** 2026-02-28 01:03:54.640315 | orchestrator | =============================================================================== 2026-02-28 01:03:54.640322 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 8.83s 2026-02-28 01:03:54.640328 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.91s 2026-02-28 01:03:54.640334 | orchestrator | Group hosts based on Kolla action 
--------------------------------------- 0.35s 2026-02-28 01:03:54.640340 | orchestrator | 2026-02-28 01:03:54.640347 | orchestrator | 2026-02-28 01:03:54.640353 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:03:54.640359 | orchestrator | 2026-02-28 01:03:54.640366 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:03:54.640372 | orchestrator | Saturday 28 February 2026 01:01:17 +0000 (0:00:00.314) 0:00:00.314 ***** 2026-02-28 01:03:54.640378 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:03:54.640385 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:03:54.640391 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:03:54.640397 | orchestrator | 2026-02-28 01:03:54.640404 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:03:54.640410 | orchestrator | Saturday 28 February 2026 01:01:17 +0000 (0:00:00.316) 0:00:00.630 ***** 2026-02-28 01:03:54.640416 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-02-28 01:03:54.640423 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-02-28 01:03:54.640429 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-02-28 01:03:54.640436 | orchestrator | 2026-02-28 01:03:54.640442 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-02-28 01:03:54.640449 | orchestrator | 2026-02-28 01:03:54.640455 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-28 01:03:54.640461 | orchestrator | Saturday 28 February 2026 01:01:18 +0000 (0:00:00.592) 0:00:01.222 ***** 2026-02-28 01:03:54.640468 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:03:54.640474 | orchestrator | 2026-02-28 01:03:54.640480 | 
orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-02-28 01:03:54.640487 | orchestrator | Saturday 28 February 2026 01:01:19 +0000 (0:00:00.731) 0:00:01.954 ***** 2026-02-28 01:03:54.640519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-28 01:03:54.640534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-28 01:03:54.640554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-28 01:03:54.640574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:03:54.640967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:03:54.640995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:03:54.641003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:03:54.641010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:03:54.641023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:03:54.641030 | orchestrator | 2026-02-28 01:03:54.641041 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-02-28 01:03:54.641053 | orchestrator | Saturday 28 February 2026 01:01:20 +0000 (0:00:01.795) 0:00:03.750 ***** 2026-02-28 01:03:54.641064 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:54.641075 | orchestrator | 2026-02-28 01:03:54.641086 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-02-28 01:03:54.641095 | orchestrator | 
Saturday 28 February 2026 01:01:21 +0000 (0:00:00.154) 0:00:03.904 ***** 2026-02-28 01:03:54.641106 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:54.641117 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:03:54.641127 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:03:54.641139 | orchestrator | 2026-02-28 01:03:54.641173 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-02-28 01:03:54.641185 | orchestrator | Saturday 28 February 2026 01:01:21 +0000 (0:00:00.480) 0:00:04.384 ***** 2026-02-28 01:03:54.641196 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 01:03:54.641215 | orchestrator | 2026-02-28 01:03:54.641225 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-28 01:03:54.641236 | orchestrator | Saturday 28 February 2026 01:01:22 +0000 (0:00:00.944) 0:00:05.329 ***** 2026-02-28 01:03:54.641246 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:03:54.641258 | orchestrator | 2026-02-28 01:03:54.641269 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-02-28 01:03:54.641280 | orchestrator | Saturday 28 February 2026 01:01:22 +0000 (0:00:00.546) 0:00:05.875 ***** 2026-02-28 01:03:54.641301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-28 01:03:54.641316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-28 01:03:54.641334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-28 01:03:54.641347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:03:54.641585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-02-28 01:03:54.641604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:03:54.641670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:03:54.641685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:03:54.641704 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:03:54.641715 | orchestrator | 2026-02-28 01:03:54.641726 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-02-28 01:03:54.641737 | orchestrator | Saturday 28 February 2026 01:01:26 +0000 (0:00:03.686) 0:00:09.562 ***** 2026-02-28 01:03:54.641749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-28 
01:03:54.641768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:03:54.641785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 01:03:54.641795 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:54.641804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-28 01:03:54.641818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:03:54.641828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 01:03:54.641844 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:03:54.641855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-28 01:03:54.641872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:03:54.641883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 01:03:54.641893 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:03:54.641904 | orchestrator | 2026-02-28 01:03:54.641914 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-02-28 01:03:54.641924 | orchestrator | Saturday 28 February 2026 01:01:27 +0000 (0:00:00.660) 0:00:10.223 ***** 2026-02-28 01:03:54.641939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-28 01:03:54.641958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:03:54.641968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 01:03:54.641977 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:54.641994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-28 01:03:54.642071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:03:54.642094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-28 01:03:54.642117 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 01:03:54.642128 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:03:54.642140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:03:54.642151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 01:03:54.642162 | orchestrator 
| skipping: [testbed-node-1] 2026-02-28 01:03:54.642173 | orchestrator | 2026-02-28 01:03:54.642184 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-02-28 01:03:54.642195 | orchestrator | Saturday 28 February 2026 01:01:28 +0000 (0:00:00.965) 0:00:11.188 ***** 2026-02-28 01:03:54.642215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-28 01:03:54.642229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-28 01:03:54.642243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-28 01:03:54.642252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:03:54.642264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:03:54.642272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:03:54.642280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:03:54.642296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:03:54.642304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:03:54.642312 | orchestrator | 2026-02-28 01:03:54.642319 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-02-28 01:03:54.642327 | orchestrator | Saturday 28 February 2026 01:01:31 +0000 (0:00:03.486) 0:00:14.674 ***** 2026-02-28 01:03:54.642335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-28 01:03:54.642349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin', 'option httpchk']}}}}) 2026-02-28 01:03:54.642360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:03:54.642371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:03:54.642378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-28 01:03:54.642385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:03:54.642397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:03:54.642404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:03:54.642418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:03:54.642425 | orchestrator | 2026-02-28 01:03:54.642436 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-02-28 01:03:54.642442 | orchestrator | Saturday 28 February 2026 01:01:37 +0000 (0:00:06.033) 0:00:20.708 ***** 2026-02-28 01:03:54.642466 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:03:54.642473 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:03:54.642479 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:03:54.642485 | orchestrator | 2026-02-28 01:03:54.642492 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-02-28 01:03:54.642498 | orchestrator | Saturday 28 February 2026 01:01:39 +0000 (0:00:01.715) 0:00:22.424 ***** 2026-02-28 01:03:54.642504 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:54.642511 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:03:54.642517 | orchestrator | 
skipping: [testbed-node-1] 2026-02-28 01:03:54.642524 | orchestrator | 2026-02-28 01:03:54.642530 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-02-28 01:03:54.642536 | orchestrator | Saturday 28 February 2026 01:01:40 +0000 (0:00:00.665) 0:00:23.090 ***** 2026-02-28 01:03:54.642543 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:54.642549 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:03:54.642556 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:03:54.642562 | orchestrator | 2026-02-28 01:03:54.642568 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-02-28 01:03:54.642575 | orchestrator | Saturday 28 February 2026 01:01:40 +0000 (0:00:00.344) 0:00:23.435 ***** 2026-02-28 01:03:54.642581 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:54.642588 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:03:54.642594 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:03:54.642600 | orchestrator | 2026-02-28 01:03:54.642607 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-02-28 01:03:54.642613 | orchestrator | Saturday 28 February 2026 01:01:41 +0000 (0:00:00.537) 0:00:23.973 ***** 2026-02-28 01:03:54.642620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-28 01:03:54.642839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:03:54.642861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-28 01:03:54.642872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 01:03:54.642879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:03:54.642886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 01:03:54.642893 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:54.642899 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:03:54.642913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-28 01:03:54.642927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 
01:03:54.642938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 01:03:54.642944 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:03:54.642951 | orchestrator | 2026-02-28 01:03:54.642957 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-28 01:03:54.642963 | orchestrator | Saturday 28 February 2026 01:01:41 +0000 (0:00:00.579) 0:00:24.552 ***** 2026-02-28 01:03:54.642970 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:54.642976 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:03:54.642982 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:03:54.642989 | orchestrator | 2026-02-28 01:03:54.642995 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-02-28 01:03:54.643002 | orchestrator | Saturday 28 February 2026 01:01:42 +0000 (0:00:00.337) 0:00:24.889 ***** 2026-02-28 01:03:54.643008 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-28 01:03:54.643015 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-28 01:03:54.643021 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-28 01:03:54.643028 | orchestrator | 2026-02-28 01:03:54.643034 | orchestrator | TASK 
[keystone : Checking whether keystone-paste.ini file exists] ************** 2026-02-28 01:03:54.643040 | orchestrator | Saturday 28 February 2026 01:01:43 +0000 (0:00:01.795) 0:00:26.684 ***** 2026-02-28 01:03:54.643047 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 01:03:54.643054 | orchestrator | 2026-02-28 01:03:54.643060 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-02-28 01:03:54.643066 | orchestrator | Saturday 28 February 2026 01:01:45 +0000 (0:00:01.217) 0:00:27.901 ***** 2026-02-28 01:03:54.643072 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:54.643079 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:03:54.643085 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:03:54.643091 | orchestrator | 2026-02-28 01:03:54.643098 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-02-28 01:03:54.643104 | orchestrator | Saturday 28 February 2026 01:01:46 +0000 (0:00:01.331) 0:00:29.233 ***** 2026-02-28 01:03:54.643115 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 01:03:54.643122 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-28 01:03:54.643128 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-28 01:03:54.643134 | orchestrator | 2026-02-28 01:03:54.643140 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-02-28 01:03:54.643147 | orchestrator | Saturday 28 February 2026 01:01:47 +0000 (0:00:01.527) 0:00:30.761 ***** 2026-02-28 01:03:54.643153 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:03:54.643161 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:03:54.643167 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:03:54.643173 | orchestrator | 2026-02-28 01:03:54.643180 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-02-28 01:03:54.643186 | orchestrator | 
Saturday 28 February 2026 01:01:48 +0000 (0:00:00.467) 0:00:31.229 ***** 2026-02-28 01:03:54.643192 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-28 01:03:54.643198 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-28 01:03:54.643205 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-28 01:03:54.643211 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-28 01:03:54.643217 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-28 01:03:54.643224 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-28 01:03:54.643234 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-28 01:03:54.643241 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-28 01:03:54.643247 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-28 01:03:54.643253 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-28 01:03:54.643260 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-28 01:03:54.643266 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-28 01:03:54.643272 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-28 01:03:54.643279 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 
2026-02-28 01:03:54.643285 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-28 01:03:54.643291 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-28 01:03:54.643297 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-28 01:03:54.643304 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-28 01:03:54.643310 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-28 01:03:54.643316 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-28 01:03:54.643326 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-28 01:03:54.643333 | orchestrator | 2026-02-28 01:03:54.643339 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-02-28 01:03:54.643345 | orchestrator | Saturday 28 February 2026 01:01:58 +0000 (0:00:09.675) 0:00:40.904 ***** 2026-02-28 01:03:54.643351 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-28 01:03:54.643358 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-28 01:03:54.643368 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-28 01:03:54.643375 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-28 01:03:54.643381 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-28 01:03:54.643388 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-28 01:03:54.643394 | orchestrator | 2026-02-28 01:03:54.643400 | 
orchestrator | TASK [service-check-containers : keystone | Check containers] ****************** 2026-02-28 01:03:54.643406 | orchestrator | Saturday 28 February 2026 01:02:01 +0000 (0:00:03.084) 0:00:43.989 ***** 2026-02-28 01:03:54.643414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-28 01:03:54.643425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-28 01:03:54.643436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-28 01:03:54.643450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:03:54.643457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:03:54.643464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:03:54.643471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:03:54.643482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:03:54.643489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:03:54.643496 | orchestrator | 2026-02-28 01:03:54.643502 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] *** 2026-02-28 01:03:54.643509 | orchestrator | Saturday 28 February 2026 01:02:03 +0000 (0:00:02.481) 0:00:46.470 ***** 2026-02-28 01:03:54.643519 | orchestrator | changed: [testbed-node-0] => { 2026-02-28 01:03:54.643526 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:03:54.643532 | orchestrator | } 2026-02-28 01:03:54.643539 | orchestrator | changed: [testbed-node-1] => { 2026-02-28 
01:03:54.643545 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:03:54.643552 | orchestrator | } 2026-02-28 01:03:54.643558 | orchestrator | changed: [testbed-node-2] => { 2026-02-28 01:03:54.643568 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:03:54.643574 | orchestrator | } 2026-02-28 01:03:54.643581 | orchestrator | 2026-02-28 01:03:54.643587 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-28 01:03:54.643594 | orchestrator | Saturday 28 February 2026 01:02:03 +0000 (0:00:00.343) 0:00:46.814 ***** 2026-02-28 01:03:54.643600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-28 01:03:54.643608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:03:54.643614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 01:03:54.643621 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:54.643741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-28 01:03:54.643762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:03:54.643769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 01:03:54.643776 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:03:54.643783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-28 01:03:54.643790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:03:54.643800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 01:03:54.643806 | 
orchestrator | skipping: [testbed-node-2] 2026-02-28 01:03:54.643812 | orchestrator | 2026-02-28 01:03:54.643824 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-28 01:03:54.643829 | orchestrator | Saturday 28 February 2026 01:02:04 +0000 (0:00:00.978) 0:00:47.793 ***** 2026-02-28 01:03:54.643835 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:54.643841 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:03:54.643846 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:03:54.643852 | orchestrator | 2026-02-28 01:03:54.643857 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-02-28 01:03:54.643863 | orchestrator | Saturday 28 February 2026 01:02:05 +0000 (0:00:00.351) 0:00:48.144 ***** 2026-02-28 01:03:54.643868 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:03:54.643874 | orchestrator | 2026-02-28 01:03:54.643879 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-02-28 01:03:54.643885 | orchestrator | Saturday 28 February 2026 01:02:07 +0000 (0:00:02.439) 0:00:50.584 ***** 2026-02-28 01:03:54.643890 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:03:54.643896 | orchestrator | 2026-02-28 01:03:54.643901 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-02-28 01:03:54.643907 | orchestrator | Saturday 28 February 2026 01:02:10 +0000 (0:00:02.331) 0:00:52.916 ***** 2026-02-28 01:03:54.643912 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:03:54.643918 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:03:54.643923 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:03:54.643929 | orchestrator | 2026-02-28 01:03:54.643938 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-02-28 01:03:54.643944 | orchestrator | Saturday 28 February 2026 01:02:11 +0000 
(0:00:01.051) 0:00:53.968 ***** 2026-02-28 01:03:54.643950 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:03:54.643955 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:03:54.643961 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:03:54.643966 | orchestrator | 2026-02-28 01:03:54.643972 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-02-28 01:03:54.643977 | orchestrator | Saturday 28 February 2026 01:02:11 +0000 (0:00:00.339) 0:00:54.307 ***** 2026-02-28 01:03:54.643983 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:54.643988 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:03:54.643994 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:03:54.644000 | orchestrator | 2026-02-28 01:03:54.644005 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-02-28 01:03:54.644011 | orchestrator | Saturday 28 February 2026 01:02:12 +0000 (0:00:00.589) 0:00:54.897 ***** 2026-02-28 01:03:54.644016 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:03:54.644022 | orchestrator | 2026-02-28 01:03:54.644027 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-02-28 01:03:54.644032 | orchestrator | Saturday 28 February 2026 01:02:27 +0000 (0:00:15.823) 0:01:10.720 ***** 2026-02-28 01:03:54.644038 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:03:54.644044 | orchestrator | 2026-02-28 01:03:54.644049 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-28 01:03:54.644054 | orchestrator | Saturday 28 February 2026 01:02:39 +0000 (0:00:11.273) 0:01:21.994 ***** 2026-02-28 01:03:54.644060 | orchestrator | 2026-02-28 01:03:54.644065 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-28 01:03:54.644071 | orchestrator | Saturday 28 February 2026 01:02:39 +0000 (0:00:00.079) 
0:01:22.074 ***** 2026-02-28 01:03:54.644076 | orchestrator | 2026-02-28 01:03:54.644082 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-28 01:03:54.644087 | orchestrator | Saturday 28 February 2026 01:02:39 +0000 (0:00:00.090) 0:01:22.164 ***** 2026-02-28 01:03:54.644093 | orchestrator | 2026-02-28 01:03:54.644098 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-02-28 01:03:54.644104 | orchestrator | Saturday 28 February 2026 01:02:39 +0000 (0:00:00.118) 0:01:22.283 ***** 2026-02-28 01:03:54.644109 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:03:54.644115 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:03:54.644124 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:03:54.644130 | orchestrator | 2026-02-28 01:03:54.644135 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-02-28 01:03:54.644141 | orchestrator | Saturday 28 February 2026 01:03:02 +0000 (0:00:22.650) 0:01:44.933 ***** 2026-02-28 01:03:54.644146 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:03:54.644152 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:03:54.644157 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:03:54.644163 | orchestrator | 2026-02-28 01:03:54.644168 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-02-28 01:03:54.644174 | orchestrator | Saturday 28 February 2026 01:03:12 +0000 (0:00:10.476) 0:01:55.410 ***** 2026-02-28 01:03:54.644179 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:03:54.644185 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:03:54.644190 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:03:54.644196 | orchestrator | 2026-02-28 01:03:54.644201 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-28 01:03:54.644207 | 
orchestrator | Saturday 28 February 2026 01:03:19 +0000 (0:00:07.167) 0:02:02.577 ***** 2026-02-28 01:03:54.644213 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:03:54.644219 | orchestrator | 2026-02-28 01:03:54.644224 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-02-28 01:03:54.644230 | orchestrator | Saturday 28 February 2026 01:03:20 +0000 (0:00:00.679) 0:02:03.256 ***** 2026-02-28 01:03:54.644236 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:03:54.644244 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:03:54.644250 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:03:54.644255 | orchestrator | 2026-02-28 01:03:54.644261 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-02-28 01:03:54.644266 | orchestrator | Saturday 28 February 2026 01:03:22 +0000 (0:00:02.055) 0:02:05.311 ***** 2026-02-28 01:03:54.644272 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:03:54.644277 | orchestrator | 2026-02-28 01:03:54.644283 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-02-28 01:03:54.644288 | orchestrator | Saturday 28 February 2026 01:03:24 +0000 (0:00:02.160) 0:02:07.472 ***** 2026-02-28 01:03:54.644294 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-02-28 01:03:54.644300 | orchestrator | 2026-02-28 01:03:54.644305 | orchestrator | TASK [service-ks-register : keystone | Creating/deleting services] ************* 2026-02-28 01:03:54.644311 | orchestrator | Saturday 28 February 2026 01:03:37 +0000 (0:00:12.539) 0:02:20.011 ***** 2026-02-28 01:03:54.644316 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-02-28 01:03:54.644322 | orchestrator | 2026-02-28 01:03:54.644328 | orchestrator | TASK [service-ks-register : keystone | Creating/deleting endpoints] 
************ 2026-02-28 01:03:54.644333 | orchestrator | Saturday 28 February 2026 01:03:41 +0000 (0:00:04.455) 0:02:24.467 ***** 2026-02-28 01:03:54.644339 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-02-28 01:03:54.644344 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-02-28 01:03:54.644350 | orchestrator | 2026-02-28 01:03:54.644355 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-02-28 01:03:54.644361 | orchestrator | Saturday 28 February 2026 01:03:48 +0000 (0:00:06.929) 0:02:31.397 ***** 2026-02-28 01:03:54.644366 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:54.644372 | orchestrator | 2026-02-28 01:03:54.644377 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-02-28 01:03:54.644386 | orchestrator | Saturday 28 February 2026 01:03:48 +0000 (0:00:00.146) 0:02:31.544 ***** 2026-02-28 01:03:54.644392 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:54.644397 | orchestrator | 2026-02-28 01:03:54.644403 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-02-28 01:03:54.644412 | orchestrator | Saturday 28 February 2026 01:03:48 +0000 (0:00:00.121) 0:02:31.665 ***** 2026-02-28 01:03:54.644418 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:54.644423 | orchestrator | 2026-02-28 01:03:54.644429 | orchestrator | TASK [service-ks-register : keystone | Granting/revoking user roles] *********** 2026-02-28 01:03:54.644434 | orchestrator | Saturday 28 February 2026 01:03:48 +0000 (0:00:00.102) 0:02:31.768 ***** 2026-02-28 01:03:54.644440 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:54.644445 | orchestrator | 2026-02-28 01:03:54.644451 | orchestrator | TASK [keystone : Creating default user role] *********************************** 
2026-02-28 01:03:54.644457 | orchestrator | Saturday 28 February 2026 01:03:49 +0000 (0:00:00.309) 0:02:32.077 ***** 2026-02-28 01:03:54.644462 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:03:54.644468 | orchestrator | 2026-02-28 01:03:54.644473 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-28 01:03:54.644479 | orchestrator | Saturday 28 February 2026 01:03:52 +0000 (0:00:03.113) 0:02:35.190 ***** 2026-02-28 01:03:54.644484 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:03:54.644490 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:03:54.644495 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:03:54.644501 | orchestrator | 2026-02-28 01:03:54.644507 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:03:54.644512 | orchestrator | testbed-node-0 : ok=34  changed=20  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-02-28 01:03:54.644519 | orchestrator | testbed-node-1 : ok=23  changed=13  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-02-28 01:03:54.644524 | orchestrator | testbed-node-2 : ok=23  changed=13  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-02-28 01:03:54.644530 | orchestrator | 2026-02-28 01:03:54.644535 | orchestrator | 2026-02-28 01:03:54.644541 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:03:54.644547 | orchestrator | Saturday 28 February 2026 01:03:52 +0000 (0:00:00.400) 0:02:35.591 ***** 2026-02-28 01:03:54.644552 | orchestrator | =============================================================================== 2026-02-28 01:03:54.644558 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 22.65s 2026-02-28 01:03:54.644563 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.82s 2026-02-28 01:03:54.644570 | orchestrator 
| keystone : Creating admin project, user, role, service, and endpoint --- 12.54s 2026-02-28 01:03:54.644580 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.27s 2026-02-28 01:03:54.644590 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.48s 2026-02-28 01:03:54.644600 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.68s 2026-02-28 01:03:54.644609 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.17s 2026-02-28 01:03:54.644618 | orchestrator | service-ks-register : keystone | Creating/deleting endpoints ------------ 6.93s 2026-02-28 01:03:54.644626 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.03s 2026-02-28 01:03:54.644651 | orchestrator | service-ks-register : keystone | Creating/deleting services ------------- 4.46s 2026-02-28 01:03:54.644660 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.69s 2026-02-28 01:03:54.644675 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.49s 2026-02-28 01:03:54.644684 | orchestrator | keystone : Creating default user role ----------------------------------- 3.11s 2026-02-28 01:03:54.644692 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.08s 2026-02-28 01:03:54.644701 | orchestrator | service-check-containers : keystone | Check containers ------------------ 2.48s 2026-02-28 01:03:54.644710 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.44s 2026-02-28 01:03:54.644724 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.33s 2026-02-28 01:03:54.644732 | orchestrator | keystone : Run key distribution ----------------------------------------- 2.16s 2026-02-28 01:03:54.644740 | orchestrator | keystone : 
Waiting for Keystone SSH port to be UP ----------------------- 2.06s 2026-02-28 01:03:54.644749 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.80s 2026-02-28 01:03:54.644758 | orchestrator | 2026-02-28 01:03:54 | INFO  | Task 9c9c3702-3f62-49d3-a5e2-56d961667a99 is in state STARTED 2026-02-28 01:03:54.644765 | orchestrator | 2026-02-28 01:03:54 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED 2026-02-28 01:03:54.644773 | orchestrator | 2026-02-28 01:03:54 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:57.691412 | orchestrator | 2026-02-28 01:03:57 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:03:57.693841 | orchestrator | 2026-02-28 01:03:57 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED 2026-02-28 01:03:57.697144 | orchestrator | 2026-02-28 01:03:57 | INFO  | Task b994309c-c798-418b-a4f7-87a36b92ec01 is in state STARTED 2026-02-28 01:03:57.700539 | orchestrator | 2026-02-28 01:03:57 | INFO  | Task 9c9c3702-3f62-49d3-a5e2-56d961667a99 is in state STARTED 2026-02-28 01:03:57.703481 | orchestrator | 2026-02-28 01:03:57 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED 2026-02-28 01:03:57.703802 | orchestrator | 2026-02-28 01:03:57 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:04:12.901029 | orchestrator | 2026-02-28 01:04:12 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:04:12.901699 | orchestrator | 2026-02-28 01:04:12 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED 2026-02-28 01:04:12.902315 | orchestrator | 2026-02-28 01:04:12 | INFO  | Task b994309c-c798-418b-a4f7-87a36b92ec01 is in state STARTED 2026-02-28 01:04:12.903335 | orchestrator | 2026-02-28 01:04:12 | INFO  | Task 9c9c3702-3f62-49d3-a5e2-56d961667a99 is in state SUCCESS 2026-02-28 01:04:12.903996 | orchestrator | 2026-02-28 01:04:12 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED 2026-02-28 01:04:12.904123 | orchestrator | 2026-02-28 01:04:12 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:04:15.942912 | orchestrator | 2026-02-28 01:04:15 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:04:15.943818 | orchestrator | 2026-02-28 01:04:15 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED 2026-02-28 01:04:15.945844 | orchestrator | 2026-02-28 01:04:15 | INFO  | Task b994309c-c798-418b-a4f7-87a36b92ec01 is in state STARTED 2026-02-28 01:04:15.946781 | orchestrator | 2026-02-28 01:04:15 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED 2026-02-28 01:04:15.948103 | orchestrator | 2026-02-28 01:04:15 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:04:15.948138 | orchestrator | 2026-02-28 01:04:15 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:04:43.394534 | orchestrator | 2026-02-28 01:04:43 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:04:43.395242 | orchestrator | 2026-02-28 01:04:43 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED 2026-02-28 01:04:43.395534 | orchestrator | 2026-02-28 01:04:43 | INFO  | Task b994309c-c798-418b-a4f7-87a36b92ec01 is in state SUCCESS 2026-02-28 01:04:43.396056 | orchestrator | 2026-02-28 01:04:43.396089 | orchestrator | 2026-02-28 01:04:43.396101 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:04:43.396114 | orchestrator | 2026-02-28 01:04:43.396126 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:04:43.396138 | orchestrator | Saturday 28 February 2026 01:03:35 +0000 (0:00:00.353) 0:00:00.353 ***** 2026-02-28 01:04:43.396149 | orchestrator | ok: [testbed-manager] 2026-02-28 01:04:43.396162 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:04:43.396174 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:04:43.396185 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:04:43.396196 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:04:43.396208 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:04:43.396220 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:04:43.396231 | orchestrator | 2026-02-28
01:04:43.396242 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:04:43.396254 | orchestrator | Saturday 28 February 2026 01:03:36 +0000 (0:00:01.345) 0:00:01.699 ***** 2026-02-28 01:04:43.396265 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-02-28 01:04:43.396277 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-02-28 01:04:43.396289 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-02-28 01:04:43.396301 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-02-28 01:04:43.396312 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-02-28 01:04:43.396323 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-02-28 01:04:43.396334 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-02-28 01:04:43.396345 | orchestrator | 2026-02-28 01:04:43.396356 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-28 01:04:43.396367 | orchestrator | 2026-02-28 01:04:43.396378 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-02-28 01:04:43.396418 | orchestrator | Saturday 28 February 2026 01:03:37 +0000 (0:00:01.278) 0:00:02.978 ***** 2026-02-28 01:04:43.396432 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:04:43.396465 | orchestrator | 2026-02-28 01:04:43.396489 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating/deleting services] ************* 2026-02-28 01:04:43.396501 | orchestrator | Saturday 28 February 2026 01:03:39 +0000 (0:00:01.634) 0:00:04.612 ***** 2026-02-28 01:04:43.396513 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-02-28 01:04:43.396524 | orchestrator 
| 2026-02-28 01:04:43.396535 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating/deleting endpoints] ************ 2026-02-28 01:04:43.396547 | orchestrator | Saturday 28 February 2026 01:03:44 +0000 (0:00:05.282) 0:00:09.894 ***** 2026-02-28 01:04:43.396572 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-02-28 01:04:43.396587 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-02-28 01:04:43.396598 | orchestrator | 2026-02-28 01:04:43.396610 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-02-28 01:04:43.396621 | orchestrator | Saturday 28 February 2026 01:03:51 +0000 (0:00:06.884) 0:00:16.779 ***** 2026-02-28 01:04:43.396633 | orchestrator | ok: [testbed-manager] => (item=service) 2026-02-28 01:04:43.396645 | orchestrator | 2026-02-28 01:04:43.396656 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-02-28 01:04:43.396688 | orchestrator | Saturday 28 February 2026 01:03:55 +0000 (0:00:03.405) 0:00:20.185 ***** 2026-02-28 01:04:43.396701 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-02-28 01:04:43.396714 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-28 01:04:43.396727 | orchestrator | 2026-02-28 01:04:43.396740 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-02-28 01:04:43.396753 | orchestrator | Saturday 28 February 2026 01:03:59 +0000 (0:00:04.260) 0:00:24.446 ***** 2026-02-28 01:04:43.396766 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-02-28 01:04:43.396779 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-02-28 01:04:43.396791 | orchestrator | 2026-02-28 01:04:43.396804 | orchestrator | TASK 
[service-ks-register : ceph-rgw | Granting/revoking user roles] *********** 2026-02-28 01:04:43.396817 | orchestrator | Saturday 28 February 2026 01:04:06 +0000 (0:00:07.171) 0:00:31.618 ***** 2026-02-28 01:04:43.396831 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-02-28 01:04:43.396845 | orchestrator | 2026-02-28 01:04:43.396858 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:04:43.396872 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:04:43.396887 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:04:43.396899 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:04:43.396912 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:04:43.396925 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:04:43.396953 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:04:43.396967 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:04:43.396989 | orchestrator | 2026-02-28 01:04:43.397003 | orchestrator | 2026-02-28 01:04:43.397015 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:04:43.397029 | orchestrator | Saturday 28 February 2026 01:04:11 +0000 (0:00:04.974) 0:00:36.592 ***** 2026-02-28 01:04:43.397043 | orchestrator | =============================================================================== 2026-02-28 01:04:43.397073 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.17s 2026-02-28 01:04:43.397086 | 
orchestrator | service-ks-register : ceph-rgw | Creating/deleting endpoints ------------ 6.88s 2026-02-28 01:04:43.397098 | orchestrator | service-ks-register : ceph-rgw | Creating/deleting services ------------- 5.28s 2026-02-28 01:04:43.397120 | orchestrator | service-ks-register : ceph-rgw | Granting/revoking user roles ----------- 4.97s 2026-02-28 01:04:43.397132 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.26s 2026-02-28 01:04:43.397143 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.41s 2026-02-28 01:04:43.397155 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.63s 2026-02-28 01:04:43.397167 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.35s 2026-02-28 01:04:43.397179 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.28s 2026-02-28 01:04:43.397191 | orchestrator | 2026-02-28 01:04:43.397202 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-28 01:04:43.397214 | orchestrator | 2.16.14 2026-02-28 01:04:43.397226 | orchestrator | 2026-02-28 01:04:43.397238 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-02-28 01:04:43.397249 | orchestrator | 2026-02-28 01:04:43.397260 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-02-28 01:04:43.397271 | orchestrator | Saturday 28 February 2026 01:03:16 +0000 (0:00:00.290) 0:00:00.290 ***** 2026-02-28 01:04:43.397282 | orchestrator | changed: [testbed-manager] 2026-02-28 01:04:43.397294 | orchestrator | 2026-02-28 01:04:43.397305 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-02-28 01:04:43.397316 | orchestrator | Saturday 28 February 2026 01:03:17 +0000 (0:00:01.480) 0:00:01.770 ***** 2026-02-28 
01:04:43.397328 | orchestrator | changed: [testbed-manager] 2026-02-28 01:04:43.397339 | orchestrator | 2026-02-28 01:04:43.397351 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-02-28 01:04:43.397362 | orchestrator | Saturday 28 February 2026 01:03:19 +0000 (0:00:01.344) 0:00:03.114 ***** 2026-02-28 01:04:43.397381 | orchestrator | changed: [testbed-manager] 2026-02-28 01:04:43.397393 | orchestrator | 2026-02-28 01:04:43.397405 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-02-28 01:04:43.397417 | orchestrator | Saturday 28 February 2026 01:03:20 +0000 (0:00:01.284) 0:00:04.399 ***** 2026-02-28 01:04:43.397428 | orchestrator | changed: [testbed-manager] 2026-02-28 01:04:43.397440 | orchestrator | 2026-02-28 01:04:43.397452 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-02-28 01:04:43.397464 | orchestrator | Saturday 28 February 2026 01:03:21 +0000 (0:00:01.360) 0:00:05.760 ***** 2026-02-28 01:04:43.397475 | orchestrator | changed: [testbed-manager] 2026-02-28 01:04:43.397486 | orchestrator | 2026-02-28 01:04:43.397498 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-02-28 01:04:43.397510 | orchestrator | Saturday 28 February 2026 01:03:23 +0000 (0:00:01.341) 0:00:07.101 ***** 2026-02-28 01:04:43.397521 | orchestrator | changed: [testbed-manager] 2026-02-28 01:04:43.397533 | orchestrator | 2026-02-28 01:04:43.397544 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-02-28 01:04:43.397556 | orchestrator | Saturday 28 February 2026 01:03:24 +0000 (0:00:01.274) 0:00:08.376 ***** 2026-02-28 01:04:43.397567 | orchestrator | changed: [testbed-manager] 2026-02-28 01:04:43.397578 | orchestrator | 2026-02-28 01:04:43.397589 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] 
************************* 2026-02-28 01:04:43.397608 | orchestrator | Saturday 28 February 2026 01:03:26 +0000 (0:00:02.018) 0:00:10.395 ***** 2026-02-28 01:04:43.397620 | orchestrator | changed: [testbed-manager] 2026-02-28 01:04:43.397631 | orchestrator | 2026-02-28 01:04:43.397642 | orchestrator | TASK [Create admin user] ******************************************************* 2026-02-28 01:04:43.397654 | orchestrator | Saturday 28 February 2026 01:03:27 +0000 (0:00:01.523) 0:00:11.919 ***** 2026-02-28 01:04:43.397730 | orchestrator | changed: [testbed-manager] 2026-02-28 01:04:43.397744 | orchestrator | 2026-02-28 01:04:43.397756 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-02-28 01:04:43.397767 | orchestrator | Saturday 28 February 2026 01:04:17 +0000 (0:00:49.821) 0:01:01.740 ***** 2026-02-28 01:04:43.397778 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:04:43.397900 | orchestrator | 2026-02-28 01:04:43.397915 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-28 01:04:43.397926 | orchestrator | 2026-02-28 01:04:43.397937 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-28 01:04:43.397949 | orchestrator | Saturday 28 February 2026 01:04:17 +0000 (0:00:00.178) 0:01:01.919 ***** 2026-02-28 01:04:43.397961 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:04:43.397979 | orchestrator | 2026-02-28 01:04:43.397998 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-28 01:04:43.398082 | orchestrator | 2026-02-28 01:04:43.398102 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-28 01:04:43.398114 | orchestrator | Saturday 28 February 2026 01:04:29 +0000 (0:00:11.880) 0:01:13.799 ***** 2026-02-28 01:04:43.398125 | orchestrator | changed: [testbed-node-1] 2026-02-28 
01:04:43.398136 | orchestrator | 2026-02-28 01:04:43.398147 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-28 01:04:43.398158 | orchestrator | 2026-02-28 01:04:43.398169 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-28 01:04:43.398193 | orchestrator | Saturday 28 February 2026 01:04:31 +0000 (0:00:01.326) 0:01:15.126 ***** 2026-02-28 01:04:43.398205 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:04:43.398216 | orchestrator | 2026-02-28 01:04:43.398227 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:04:43.398238 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 01:04:43.398250 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:04:43.398261 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:04:43.398271 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:04:43.398281 | orchestrator | 2026-02-28 01:04:43.398291 | orchestrator | 2026-02-28 01:04:43.398301 | orchestrator | 2026-02-28 01:04:43.398310 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:04:43.398320 | orchestrator | Saturday 28 February 2026 01:04:42 +0000 (0:00:11.396) 0:01:26.523 ***** 2026-02-28 01:04:43.398330 | orchestrator | =============================================================================== 2026-02-28 01:04:43.398340 | orchestrator | Create admin user ------------------------------------------------------ 49.82s 2026-02-28 01:04:43.398350 | orchestrator | Restart ceph manager service ------------------------------------------- 24.60s 2026-02-28 01:04:43.398360 | orchestrator | 
Enable the ceph dashboard ----------------------------------------------- 2.02s 2026-02-28 01:04:43.398370 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.52s 2026-02-28 01:04:43.398379 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.48s 2026-02-28 01:04:43.398389 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.36s 2026-02-28 01:04:43.398411 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.34s 2026-02-28 01:04:43.398420 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.34s 2026-02-28 01:04:43.398430 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.28s 2026-02-28 01:04:43.398440 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.27s 2026-02-28 01:04:43.398450 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.18s 2026-02-28 01:04:43.398467 | orchestrator | 2026-02-28 01:04:43 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED 2026-02-28 01:04:43.399296 | orchestrator | 2026-02-28 01:04:43 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:04:43.399326 | orchestrator | 2026-02-28 01:04:43 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:04:46.472433 | orchestrator | 2026-02-28 01:04:46 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:04:46.474599 | orchestrator | 2026-02-28 01:04:46 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED 2026-02-28 01:04:46.476607 | orchestrator | 2026-02-28 01:04:46 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED 2026-02-28 01:04:46.478589 | orchestrator | 2026-02-28 01:04:46 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 
2026-02-28 01:05:26.262153 | orchestrator | 2026-02-28 01:05:26 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:05:26.263106 | orchestrator | 2026-02-28 01:05:26 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED 2026-02-28 01:05:26.263801 | orchestrator | 2026-02-28 01:05:26 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED 2026-02-28 01:05:26.264999 | orchestrator | 2026-02-28 01:05:26 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:05:26.265024 | orchestrator | 2026-02-28 01:05:26 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:05:29.304506 | orchestrator | 2026-02-28 01:05:29 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:05:29.304921 | orchestrator | 2026-02-28 01:05:29 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED 2026-02-28 01:05:29.305691 | orchestrator | 2026-02-28 01:05:29 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED 2026-02-28 01:05:29.306514 | orchestrator | 2026-02-28 01:05:29 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:05:29.306541 | orchestrator | 2026-02-28 01:05:29 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:05:32.340682 | orchestrator | 2026-02-28 01:05:32 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:05:32.341271 | orchestrator | 2026-02-28 01:05:32 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED 2026-02-28 01:05:32.341961 | orchestrator | 2026-02-28 01:05:32 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED 2026-02-28 01:05:32.342649 | orchestrator | 2026-02-28 01:05:32 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:05:32.342678 | orchestrator | 2026-02-28 01:05:32 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:05:35.386005 | 
orchestrator | 2026-02-28 01:05:35 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:05:35.388936 | orchestrator | 2026-02-28 01:05:35 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED 2026-02-28 01:05:35.391400 | orchestrator | 2026-02-28 01:05:35 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED 2026-02-28 01:05:35.409780 | orchestrator | 2026-02-28 01:05:35 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:05:35.409890 | orchestrator | 2026-02-28 01:05:35 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:05:38.443770 | orchestrator | 2026-02-28 01:05:38 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:05:38.444333 | orchestrator | 2026-02-28 01:05:38 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED 2026-02-28 01:05:38.445404 | orchestrator | 2026-02-28 01:05:38 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED 2026-02-28 01:05:38.446635 | orchestrator | 2026-02-28 01:05:38 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:05:38.446669 | orchestrator | 2026-02-28 01:05:38 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:05:41.520346 | orchestrator | 2026-02-28 01:05:41 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:05:41.521025 | orchestrator | 2026-02-28 01:05:41 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED 2026-02-28 01:05:41.522006 | orchestrator | 2026-02-28 01:05:41 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED 2026-02-28 01:05:41.522934 | orchestrator | 2026-02-28 01:05:41 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:05:41.522985 | orchestrator | 2026-02-28 01:05:41 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:05:44.573946 | orchestrator | 2026-02-28 
01:05:44 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state STARTED 2026-02-28 01:05:44.574405 | orchestrator | 2026-02-28 01:05:44 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED 2026-02-28 01:05:44.575386 | orchestrator | 2026-02-28 01:05:44 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED 2026-02-28 01:05:44.577118 | orchestrator | 2026-02-28 01:05:44 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:05:44.577246 | orchestrator | 2026-02-28 01:05:44 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:06:33.406074 | orchestrator | 2026-02-28 01:06:33 | INFO  | Task e07a3aae-281f-4024-8b8b-21e92e17139a is in state SUCCESS 2026-02-28 01:06:33.407561 | orchestrator | 2026-02-28 01:06:33.407605 | orchestrator | 2026-02-28 01:06:33.407613 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:06:33.407620 | orchestrator | 2026-02-28 01:06:33.407625 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:06:33.407643 | orchestrator | Saturday 28 February 2026 01:03:17 +0000 (0:00:00.323) 0:00:00.323 ***** 2026-02-28 01:06:33.407649 | orchestrator | ok: [testbed-manager] 2026-02-28 01:06:33.407663 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:06:33.407669 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:06:33.407674 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:06:33.407680 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:06:33.407685 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:06:33.407723 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:06:33.407737 | orchestrator | 2026-02-28 01:06:33.407743 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:06:33.407756 | orchestrator | Saturday 28 February 2026 01:03:18 +0000 (0:00:00.935) 0:00:01.259 ***** 2026-02-28 01:06:33.407762 | orchestrator |
ok: [testbed-manager] => (item=enable_prometheus_True) 2026-02-28 01:06:33.407769 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-02-28 01:06:33.407774 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-02-28 01:06:33.407779 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-02-28 01:06:33.407785 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-02-28 01:06:33.407790 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-02-28 01:06:33.407795 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-02-28 01:06:33.407800 | orchestrator | 2026-02-28 01:06:33.407806 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-02-28 01:06:33.407811 | orchestrator | 2026-02-28 01:06:33.407816 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-28 01:06:33.407821 | orchestrator | Saturday 28 February 2026 01:03:19 +0000 (0:00:00.940) 0:00:02.199 ***** 2026-02-28 01:06:33.407828 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 01:06:33.407853 | orchestrator | 2026-02-28 01:06:33.407859 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-02-28 01:06:33.407864 | orchestrator | Saturday 28 February 2026 01:03:21 +0000 (0:00:02.085) 0:00:04.285 ***** 2026-02-28 01:06:33.407871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.407882 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-28 01:06:33.407889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.407915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.407922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.407928 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.407939 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.407945 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.407950 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.407955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.407962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.407974 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.407981 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.407987 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.407997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.408003 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.408008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.408014 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.408019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.408032 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:06:33.408043 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.408049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.408055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.408061 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}}) 2026-02-28 01:06:33.408066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.408072 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.408085 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.408091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.408100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.408106 | orchestrator | 2026-02-28 01:06:33.408111 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-28 01:06:33.408116 | orchestrator | Saturday 28 February 2026 01:03:24 +0000 (0:00:03.838) 0:00:08.124 ***** 2026-02-28 01:06:33.408122 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 01:06:33.408127 | orchestrator | 2026-02-28 01:06:33.408133 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-02-28 01:06:33.408138 | orchestrator | Saturday 28 February 2026 01:03:26 +0000 (0:00:01.282) 0:00:09.406 ***** 2026-02-28 01:06:33.408143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.408149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.408157 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-28 01:06:33.408168 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.408180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.408187 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.408193 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.408199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.408206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.408212 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.408222 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.408232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.408243 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.408249 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.408256 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.408262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.408268 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.408274 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.408284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.408302 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.408309 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.408315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.408598 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:06:33.408613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.408619 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.408639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.408646 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.408651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.408657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.408662 | orchestrator | 2026-02-28 01:06:33.408668 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-02-28 01:06:33.408673 | orchestrator | Saturday 28 February 2026 01:03:32 +0000 (0:00:05.795) 0:00:15.202 ***** 2026-02-28 01:06:33.408679 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-28 01:06:33.408684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:06:33.408693 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:06:33.408725 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.408731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 
'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.408737 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:06:33.408743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.408748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:06:33.408754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.408766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.408788 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.408793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.408808 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:06:33.408814 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:06:33.408827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:06:33.408833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:06:33.408838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.408844 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.408853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.408862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.408868 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:06:33.408877 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:06:33.408882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.408888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.408893 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.408898 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:06:33.408904 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.408914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.408919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.408924 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:06:33.408935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:06:33.408941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.408947 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:06:33.408952 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.408957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.408963 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:06:33.408968 | orchestrator | 2026-02-28 01:06:33.408973 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-02-28 01:06:33.408979 | orchestrator | Saturday 28 February 2026 01:03:35 +0000 (0:00:03.232) 0:00:18.434 ***** 2026-02-28 01:06:33.408984 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 
'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-28 01:06:33.408994 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:06:33.409004 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.409011 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:06:33.409016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:06:33.409022 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.409031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.409037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.409042 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:06:33.409047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.409059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:06:33.409065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.409070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.409075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:06:33.409081 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:06:33.409086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-02-28 01:06:33.409097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.409103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.409108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.409120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.409126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.409131 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:06:33.409137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:06:33.409142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-28 01:06:33.409151 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:06:33.409157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:06:33.409162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.409168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.409173 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.409182 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:06:33.409877 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.409904 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:06:33.409910 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:06:33.409917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.409937 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.409946 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:06:33.409954 | orchestrator | 2026-02-28 01:06:33.409963 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-02-28 01:06:33.409972 | orchestrator | Saturday 28 February 2026 01:03:38 +0000 (0:00:03.593) 0:00:22.028 ***** 2026-02-28 01:06:33.409981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.409990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.410049 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-28 01:06:33.410084 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.410094 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.410109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.410117 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.410126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.410134 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.410143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.410164 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.410173 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.410182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.410196 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.410205 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.410213 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.410218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.410226 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.410242 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.410251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.410265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.410273 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.410281 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:06:33.410289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.410298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.410316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.410326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.410341 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.410349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.410357 | orchestrator | 2026-02-28 01:06:33.410365 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-02-28 01:06:33.410374 | orchestrator | Saturday 28 February 2026 01:03:46 +0000 (0:00:07.535) 0:00:29.563 ***** 2026-02-28 01:06:33.410622 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-28 01:06:33.410634 | orchestrator | 2026-02-28 01:06:33.410639 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-02-28 01:06:33.410645 | orchestrator | Saturday 28 February 2026 01:03:47 +0000 (0:00:01.191) 0:00:30.755 ***** 2026-02-28 01:06:33.410649 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:06:33.410654 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:06:33.410659 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:06:33.410665 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:06:33.410669 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:06:33.410674 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:06:33.410679 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:06:33.410684 | orchestrator | 2026-02-28 01:06:33.410689 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-02-28 01:06:33.410694 | orchestrator | Saturday 28 February 2026 01:03:48 +0000 (0:00:00.719) 0:00:31.474 ***** 2026-02-28 01:06:33.410720 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-28 01:06:33.410725 | orchestrator | 2026-02-28 01:06:33.410730 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-02-28 01:06:33.410735 | 
orchestrator | Saturday 28 February 2026 01:03:49 +0000 (0:00:00.801) 0:00:32.276 ***** 2026-02-28 01:06:33.410740 | orchestrator | [WARNING]: Skipped 2026-02-28 01:06:33.410746 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:06:33.410751 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-02-28 01:06:33.410756 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:06:33.410761 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-02-28 01:06:33.410766 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 01:06:33.410771 | orchestrator | [WARNING]: Skipped 2026-02-28 01:06:33.410776 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:06:33.410781 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-02-28 01:06:33.410786 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:06:33.410799 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-02-28 01:06:33.410804 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-28 01:06:33.410809 | orchestrator | [WARNING]: Skipped 2026-02-28 01:06:33.410813 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:06:33.410818 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-02-28 01:06:33.410823 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:06:33.410832 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-02-28 01:06:33.410837 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-28 01:06:33.410842 | orchestrator | [WARNING]: Skipped 2026-02-28 01:06:33.410852 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:06:33.410857 | 
orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-02-28 01:06:33.410862 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:06:33.410867 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-02-28 01:06:33.410872 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-28 01:06:33.410877 | orchestrator | [WARNING]: Skipped 2026-02-28 01:06:33.410882 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:06:33.410886 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-02-28 01:06:33.410892 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:06:33.410896 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-02-28 01:06:33.410901 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-28 01:06:33.410906 | orchestrator | [WARNING]: Skipped 2026-02-28 01:06:33.410911 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:06:33.410916 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-02-28 01:06:33.410921 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:06:33.410926 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-02-28 01:06:33.410931 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-28 01:06:33.410935 | orchestrator | [WARNING]: Skipped 2026-02-28 01:06:33.410940 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:06:33.410945 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-02-28 01:06:33.410950 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:06:33.410955 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-02-28 01:06:33.410960 
| orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-28 01:06:33.410965 | orchestrator | 2026-02-28 01:06:33.410969 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-02-28 01:06:33.410974 | orchestrator | Saturday 28 February 2026 01:03:51 +0000 (0:00:01.957) 0:00:34.233 ***** 2026-02-28 01:06:33.410979 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-28 01:06:33.410985 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:06:33.410990 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-28 01:06:33.410995 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:06:33.411000 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-28 01:06:33.411005 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:06:33.411010 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-28 01:06:33.411014 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:06:33.411019 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-28 01:06:33.411024 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:06:33.411036 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-28 01:06:33.411041 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:06:33.411046 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-02-28 01:06:33.411051 | orchestrator | 2026-02-28 01:06:33.411056 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-02-28 01:06:33.411061 | orchestrator | Saturday 28 February 2026 01:04:07 +0000 (0:00:16.689) 0:00:50.923 ***** 
2026-02-28 01:06:33.411066 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-28 01:06:33.411070 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:06:33.411075 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-28 01:06:33.411080 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:06:33.411085 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-28 01:06:33.411090 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:06:33.411095 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-28 01:06:33.411099 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:06:33.411104 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-28 01:06:33.411109 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:06:33.411114 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-28 01:06:33.411119 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:06:33.411124 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-02-28 01:06:33.411129 | orchestrator | 2026-02-28 01:06:33.411133 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-02-28 01:06:33.411138 | orchestrator | Saturday 28 February 2026 01:04:11 +0000 (0:00:03.611) 0:00:54.534 ***** 2026-02-28 01:06:33.411143 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-28 01:06:33.411156 | orchestrator | skipping: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-28 01:06:33.411162 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-28 01:06:33.411167 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:06:33.411172 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:06:33.411177 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:06:33.411181 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-02-28 01:06:33.411186 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-28 01:06:33.411191 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:06:33.411196 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-28 01:06:33.411201 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:06:33.411206 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-28 01:06:33.411211 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:06:33.411216 | orchestrator | 2026-02-28 01:06:33.411221 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-02-28 01:06:33.411226 | orchestrator | Saturday 28 February 2026 01:04:14 +0000 (0:00:03.060) 0:00:57.594 ***** 2026-02-28 01:06:33.411231 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-28 01:06:33.411239 | orchestrator | 2026-02-28 01:06:33.411245 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-02-28 01:06:33.411251 | orchestrator | Saturday 28 February 
2026 01:04:15 +0000 (0:00:01.060) 0:00:58.654 ***** 2026-02-28 01:06:33.411257 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:06:33.411263 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:06:33.411268 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:06:33.411273 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:06:33.411279 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:06:33.411284 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:06:33.411290 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:06:33.411296 | orchestrator | 2026-02-28 01:06:33.411301 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-02-28 01:06:33.411307 | orchestrator | Saturday 28 February 2026 01:04:16 +0000 (0:00:00.803) 0:00:59.457 ***** 2026-02-28 01:06:33.411312 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:06:33.411318 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:06:33.411324 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:06:33.411329 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:06:33.411335 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:06:33.411340 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:06:33.411346 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:06:33.411351 | orchestrator | 2026-02-28 01:06:33.411357 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-02-28 01:06:33.411374 | orchestrator | Saturday 28 February 2026 01:04:19 +0000 (0:00:03.208) 0:01:02.666 ***** 2026-02-28 01:06:33.411380 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-28 01:06:33.411385 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-28 01:06:33.411391 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-28 01:06:33.411397 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:06:33.411402 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:06:33.411408 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:06:33.411414 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-28 01:06:33.411420 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:06:33.411426 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-28 01:06:33.411431 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:06:33.411437 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-28 01:06:33.411442 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:06:33.411448 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-28 01:06:33.411454 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:06:33.411460 | orchestrator | 2026-02-28 01:06:33.411465 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-02-28 01:06:33.411471 | orchestrator | Saturday 28 February 2026 01:04:21 +0000 (0:00:02.398) 0:01:05.065 ***** 2026-02-28 01:06:33.411477 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-28 01:06:33.411488 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-28 01:06:33.411497 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:06:33.411559 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:06:33.411566 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-28 
01:06:33.411572 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:06:33.411576 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-28 01:06:33.411587 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:06:33.411604 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-28 01:06:33.411609 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:06:33.411619 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-02-28 01:06:33.411624 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-28 01:06:33.411629 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:06:33.411634 | orchestrator | 2026-02-28 01:06:33.411639 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-02-28 01:06:33.411644 | orchestrator | Saturday 28 February 2026 01:04:24 +0000 (0:00:02.848) 0:01:07.914 ***** 2026-02-28 01:06:33.411649 | orchestrator | [WARNING]: Skipped 2026-02-28 01:06:33.411654 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-02-28 01:06:33.411659 | orchestrator | due to this access issue: 2026-02-28 01:06:33.411664 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-02-28 01:06:33.411669 | orchestrator | not a directory 2026-02-28 01:06:33.411674 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-28 01:06:33.411679 | orchestrator | 2026-02-28 01:06:33.411686 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-02-28 01:06:33.411694 | orchestrator | Saturday 28 February 2026 01:04:26 +0000 (0:00:01.966) 0:01:09.880 ***** 2026-02-28 
01:06:33.411773 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:06:33.411782 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:06:33.411791 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:06:33.411798 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:06:33.411807 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:06:33.411812 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:06:33.411817 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:06:33.411822 | orchestrator | 2026-02-28 01:06:33.411827 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-02-28 01:06:33.411832 | orchestrator | Saturday 28 February 2026 01:04:27 +0000 (0:00:01.011) 0:01:10.892 ***** 2026-02-28 01:06:33.411837 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:06:33.411842 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:06:33.411847 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:06:33.411852 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:06:33.411856 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:06:33.411861 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:06:33.411866 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:06:33.411871 | orchestrator | 2026-02-28 01:06:33.411876 | orchestrator | TASK [service-check-containers : prometheus | Check containers] **************** 2026-02-28 01:06:33.411881 | orchestrator | Saturday 28 February 2026 01:04:28 +0000 (0:00:00.784) 0:01:11.676 ***** 2026-02-28 01:06:33.411886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.411892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.411903 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.411918 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-28 01:06:33.411925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.411930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.411936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.411941 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.411946 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.411956 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.411961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.411971 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:06:33.411976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.411981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.411986 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.411992 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.412000 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.412033 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.412039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.412052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.412057 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.412062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.412067 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.412072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:06:33.412082 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:06:33.412090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.412100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.412105 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.412111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:06:33.412116 | orchestrator | 2026-02-28 01:06:33.412121 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] *** 2026-02-28 01:06:33.412126 | orchestrator | Saturday 28 February 2026 01:04:34 +0000 (0:00:06.200) 0:01:17.877 ***** 2026-02-28 01:06:33.412131 | orchestrator | changed: [testbed-manager] => { 2026-02-28 01:06:33.412136 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:06:33.412141 | orchestrator | } 2026-02-28 01:06:33.412149 | orchestrator | changed: [testbed-node-0] => { 2026-02-28 01:06:33.412154 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:06:33.412159 | orchestrator | } 2026-02-28 01:06:33.412164 | orchestrator | changed: [testbed-node-1] => { 2026-02-28 01:06:33.412169 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:06:33.412174 | orchestrator | } 2026-02-28 01:06:33.412179 | orchestrator | changed: [testbed-node-2] => { 2026-02-28 01:06:33.412184 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:06:33.412189 | orchestrator | } 2026-02-28 01:06:33.412193 | orchestrator | changed: [testbed-node-3] => { 2026-02-28 01:06:33.412198 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:06:33.412203 | 
orchestrator | } 2026-02-28 01:06:33.412208 | orchestrator | changed: [testbed-node-4] => { 2026-02-28 01:06:33.412213 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:06:33.412218 | orchestrator | } 2026-02-28 01:06:33.412222 | orchestrator | changed: [testbed-node-5] => { 2026-02-28 01:06:33.412227 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:06:33.412232 | orchestrator | } 2026-02-28 01:06:33.412237 | orchestrator | 2026-02-28 01:06:33.412242 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-28 01:06:33.412247 | orchestrator | Saturday 28 February 2026 01:04:36 +0000 (0:00:01.367) 0:01:19.244 ***** 2026-02-28 01:06:33.412252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:06:33.412257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.412265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:06:33.412275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.412280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.412286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.412297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.412302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.412307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.412313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.412324 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-28 01:06:33.412330 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:06:33.412335 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:06:33.412345 | 
orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.412351 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:06:33.412356 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.412361 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:06:33.412366 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:06:33.412371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:06:33.412383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.412388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-02-28 01:06:33.412399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.412404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:06:33.412409 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:06:33.412415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:06:33.412420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.412425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.412430 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:06:33.412435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:06:33.412447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.412456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.412461 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:06:33.412470 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:06:33.412475 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:06:33.412480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-28 01:06:33.412485 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:06:33.412490 | orchestrator |
2026-02-28 01:06:33.412495 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-02-28 01:06:33.412500 | orchestrator | Saturday 28 February 2026 01:04:39 +0000 (0:00:03.221) 0:01:22.466 *****
2026-02-28 01:06:33.412505 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-28 01:06:33.412510 | orchestrator | skipping: [testbed-manager]
2026-02-28 01:06:33.412515 | orchestrator |
2026-02-28 01:06:33.412520 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-28 01:06:33.412525 | orchestrator | Saturday 28 February 2026 01:04:41 +0000 (0:00:02.394) 0:01:24.861 *****
2026-02-28 01:06:33.412530 | orchestrator |
2026-02-28 01:06:33.412535 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-28 01:06:33.412540 | orchestrator | Saturday 28 February 2026 01:04:41 +0000 (0:00:00.103) 0:01:24.964 *****
2026-02-28 01:06:33.412545 | orchestrator |
2026-02-28 01:06:33.412550 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-28 01:06:33.412554 | orchestrator | Saturday 28 February 2026 01:04:41 +0000 (0:00:00.072) 0:01:25.036 *****
2026-02-28 01:06:33.412559 | orchestrator |
2026-02-28 01:06:33.412564 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-28 01:06:33.412569 | orchestrator | Saturday 28 February 2026 01:04:41 +0000 (0:00:00.075) 0:01:25.112 *****
2026-02-28 01:06:33.412574 | orchestrator |
2026-02-28 01:06:33.412578 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-28 01:06:33.412583 | orchestrator | Saturday 28 February 2026 01:04:42 +0000 (0:00:00.101) 0:01:25.213 *****
2026-02-28 01:06:33.412593 | orchestrator |
2026-02-28 01:06:33.412598 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-28 01:06:33.412602 | orchestrator | Saturday 28 February 2026 01:04:42 +0000 (0:00:00.076) 0:01:25.290 *****
2026-02-28 01:06:33.412607 | orchestrator |
2026-02-28 01:06:33.412612 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-28 01:06:33.412617 | orchestrator | Saturday 28 February 2026 01:04:42 +0000 (0:00:00.067) 0:01:25.357 *****
2026-02-28 01:06:33.412622 | orchestrator |
2026-02-28 01:06:33.412629 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-02-28 01:06:33.412637 | orchestrator | Saturday 28 February 2026 01:04:42 +0000 (0:00:00.413) 0:01:25.771 *****
2026-02-28 01:06:33.412642 | orchestrator | changed: [testbed-manager]
2026-02-28 01:06:33.412647 | orchestrator |
2026-02-28 01:06:33.412652 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-02-28 01:06:33.412657 | orchestrator | Saturday 28 February 2026 01:05:04 +0000 (0:00:21.494) 0:01:47.265 *****
2026-02-28 01:06:33.412662 | orchestrator | changed: [testbed-manager]
2026-02-28 01:06:33.412667 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:06:33.412672 | orchestrator | changed: [testbed-node-3]
2026-02-28 01:06:33.412677 | orchestrator | changed: [testbed-node-5]
2026-02-28 01:06:33.412681 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:06:33.412686 | orchestrator | changed: [testbed-node-4]
2026-02-28 01:06:33.412691 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:06:33.412741 | orchestrator |
2026-02-28 01:06:33.412748 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-02-28 01:06:33.412753 | orchestrator | Saturday 28 February 2026 01:05:18 +0000 (0:00:14.121) 0:02:01.387 *****
2026-02-28 01:06:33.412758 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:06:33.412763 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:06:33.412768 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:06:33.412773 | orchestrator |
2026-02-28 01:06:33.412778 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-02-28 01:06:33.412783 | orchestrator | Saturday 28 February 2026 01:05:25 +0000 (0:00:07.371) 0:02:08.759 *****
2026-02-28 01:06:33.412787 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:06:33.412792 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:06:33.412797 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:06:33.412802 | orchestrator |
2026-02-28 01:06:33.412807 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-02-28 01:06:33.412812 | orchestrator | Saturday 28 February 2026 01:05:37 +0000 (0:00:11.671) 0:02:20.431 *****
2026-02-28 01:06:33.412817 | orchestrator | changed: [testbed-manager]
2026-02-28 01:06:33.412821 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:06:33.412826 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:06:33.412831 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:06:33.412836 | orchestrator | changed: [testbed-node-4]
2026-02-28 01:06:33.412841 | orchestrator | changed: [testbed-node-3]
2026-02-28 01:06:33.412846 | orchestrator | changed: [testbed-node-5]
2026-02-28 01:06:33.412851 | orchestrator |
2026-02-28 01:06:33.412855 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-02-28 01:06:33.412860 | orchestrator | Saturday 28 February 2026 01:05:55 +0000 (0:00:18.215) 0:02:38.646 *****
2026-02-28 01:06:33.412865 | orchestrator | changed: [testbed-manager]
2026-02-28 01:06:33.412870 | orchestrator |
2026-02-28 01:06:33.412875 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-02-28 01:06:33.412880 | orchestrator | Saturday 28 February 2026 01:06:04 +0000 (0:00:09.269) 0:02:47.916 *****
2026-02-28 01:06:33.412885 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:06:33.412890 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:06:33.412895 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:06:33.412899 | orchestrator |
2026-02-28 01:06:33.412904 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-02-28 01:06:33.412914 | orchestrator | Saturday 28 February 2026 01:06:15 +0000 (0:00:10.585) 0:02:58.501 *****
2026-02-28 01:06:33.412919 | orchestrator | changed: [testbed-manager]
2026-02-28 01:06:33.412924 | orchestrator |
2026-02-28 01:06:33.412929 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-02-28 01:06:33.412934 | orchestrator | Saturday 28 February 2026 01:06:25 +0000 (0:00:10.376) 0:03:08.878 *****
2026-02-28 01:06:33.412939 | orchestrator | changed: [testbed-node-4]
2026-02-28 01:06:33.412944 | orchestrator | changed: [testbed-node-3]
2026-02-28 01:06:33.412948 | orchestrator | changed: [testbed-node-5]
2026-02-28 01:06:33.412953 | orchestrator |
2026-02-28 01:06:33.412958 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 01:06:33.412963 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-02-28 01:06:33.412970 | orchestrator | testbed-node-0 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-28 01:06:33.412975 | orchestrator | testbed-node-1 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-28 01:06:33.412980 | orchestrator | testbed-node-2 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-28 01:06:33.412984 | orchestrator | testbed-node-3 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-02-28 01:06:33.412989 | orchestrator | testbed-node-4 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-02-28 01:06:33.412994 | orchestrator | testbed-node-5 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-02-28 01:06:33.412999 | orchestrator |
2026-02-28 01:06:33.413004 | orchestrator |
2026-02-28 01:06:33.413009 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 01:06:33.413014 | orchestrator | Saturday 28 February 2026 01:06:32 +0000 (0:00:07.182) 0:03:16.061 *****
2026-02-28 01:06:33.413019 | orchestrator | ===============================================================================
2026-02-28 01:06:33.413024 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 21.49s
2026-02-28 01:06:33.413032 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 18.22s
2026-02-28 01:06:33.413040 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.69s
2026-02-28 01:06:33.413045 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.12s
2026-02-28 01:06:33.413050 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 11.67s
2026-02-28 01:06:33.413055 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.59s
2026-02-28 01:06:33.413060 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.38s
2026-02-28 01:06:33.413065 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 9.27s
2026-02-28 01:06:33.413070 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.54s
2026-02-28 01:06:33.413074 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 7.37s
2026-02-28 01:06:33.413079 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 7.18s
2026-02-28 01:06:33.413084 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 6.20s
2026-02-28 01:06:33.413089 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.80s
2026-02-28 01:06:33.413094 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.84s
2026-02-28 01:06:33.413099 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.61s
2026-02-28 01:06:33.413107 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 3.59s
2026-02-28 01:06:33.413112 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 3.23s
2026-02-28 01:06:33.413117 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.22s
2026-02-28 01:06:33.413122 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.21s
2026-02-28 01:06:33.413126 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.06s
2026-02-28 01:06:33.413131 | orchestrator | 2026-02-28 01:06:33 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED
2026-02-28 01:06:33.413136 | orchestrator | 2026-02-28 01:06:33 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED
2026-02-28 01:06:33.413141 | orchestrator | 2026-02-28 01:06:33 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d
is in state STARTED
2026-02-28 01:06:33.413146 | orchestrator | 2026-02-28 01:06:33 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:06:36.442451 | orchestrator | 2026-02-28 01:06:36 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED
2026-02-28 01:06:36.445170 | orchestrator | 2026-02-28 01:06:36 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED
2026-02-28 01:06:36.446139 | orchestrator | 2026-02-28 01:06:36 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED
2026-02-28 01:06:36.446668 | orchestrator | 2026-02-28 01:06:36 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED
2026-02-28 01:06:36.446739 | orchestrator | 2026-02-28 01:06:36 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:06:39.476487 | orchestrator | 2026-02-28 01:06:39 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED
2026-02-28 01:06:39.477063 | orchestrator | 2026-02-28 01:06:39 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED
2026-02-28 01:06:39.478308 | orchestrator | 2026-02-28 01:06:39 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED
2026-02-28 01:06:39.479827 | orchestrator | 2026-02-28 01:06:39 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED
2026-02-28 01:06:39.479884 | orchestrator | 2026-02-28 01:06:39 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:06:42.554008 | orchestrator | 2026-02-28 01:06:42 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED
2026-02-28 01:06:42.554934 | orchestrator | 2026-02-28 01:06:42 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED
2026-02-28 01:06:42.556202 | orchestrator | 2026-02-28 01:06:42 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED
2026-02-28 01:06:42.557508 | orchestrator | 2026-02-28 01:06:42 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED
2026-02-28 01:06:42.557531 | orchestrator | 2026-02-28 01:06:42 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:06:45.602912 | orchestrator | 2026-02-28 01:06:45 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED
2026-02-28 01:06:45.603754 | orchestrator | 2026-02-28 01:06:45 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED
2026-02-28 01:06:45.604321 | orchestrator | 2026-02-28 01:06:45 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED
2026-02-28 01:06:45.605389 | orchestrator | 2026-02-28 01:06:45 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED
2026-02-28 01:06:45.605420 | orchestrator | 2026-02-28 01:06:45 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:06:48.643628 | orchestrator | 2026-02-28 01:06:48 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED
2026-02-28 01:06:48.644158 | orchestrator | 2026-02-28 01:06:48 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED
2026-02-28 01:06:48.645070 | orchestrator | 2026-02-28 01:06:48 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED
2026-02-28 01:06:48.645997 | orchestrator | 2026-02-28 01:06:48 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED
2026-02-28 01:06:48.646084 | orchestrator | 2026-02-28 01:06:48 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:06:51.698337 | orchestrator | 2026-02-28 01:06:51 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED
2026-02-28 01:06:51.698417 | orchestrator | 2026-02-28 01:06:51 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED
2026-02-28 01:06:51.698424 | orchestrator | 2026-02-28 01:06:51 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED
2026-02-28 01:06:51.698428 | orchestrator | 2026-02-28 01:06:51 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED
2026-02-28 01:06:51.698433 | orchestrator | 2026-02-28 01:06:51 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:06:54.740138 | orchestrator | 2026-02-28 01:06:54 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED
2026-02-28 01:06:54.741913 | orchestrator | 2026-02-28 01:06:54 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED
2026-02-28 01:06:54.742969 | orchestrator | 2026-02-28 01:06:54 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED
2026-02-28 01:06:54.744423 | orchestrator | 2026-02-28 01:06:54 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED
2026-02-28 01:06:54.744451 | orchestrator | 2026-02-28 01:06:54 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:06:57.782263 | orchestrator | 2026-02-28 01:06:57 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED
2026-02-28 01:06:57.782340 | orchestrator | 2026-02-28 01:06:57 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED
2026-02-28 01:06:57.783440 | orchestrator | 2026-02-28 01:06:57 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED
2026-02-28 01:06:57.785984 | orchestrator | 2026-02-28 01:06:57 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED
2026-02-28 01:06:57.786066 | orchestrator | 2026-02-28 01:06:57 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:07:00.835749 | orchestrator | 2026-02-28 01:07:00 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED
2026-02-28 01:07:00.841745 | orchestrator | 2026-02-28 01:07:00 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED
2026-02-28 01:07:00.841824 | orchestrator | 2026-02-28 01:07:00 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED
2026-02-28 01:07:00.841834 | orchestrator | 2026-02-28 01:07:00 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED
2026-02-28 01:07:00.841843 | orchestrator | 2026-02-28 01:07:00 | INFO  |
Wait 1 second(s) until the next check 2026-02-28 01:07:03.877513 | orchestrator | 2026-02-28 01:07:03 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED 2026-02-28 01:07:03.878132 | orchestrator | 2026-02-28 01:07:03 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:07:03.879592 | orchestrator | 2026-02-28 01:07:03 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED 2026-02-28 01:07:03.882462 | orchestrator | 2026-02-28 01:07:03 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:07:03.882561 | orchestrator | 2026-02-28 01:07:03 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:06.926218 | orchestrator | 2026-02-28 01:07:06 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED 2026-02-28 01:07:06.928214 | orchestrator | 2026-02-28 01:07:06 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:07:06.930340 | orchestrator | 2026-02-28 01:07:06 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state STARTED 2026-02-28 01:07:06.931796 | orchestrator | 2026-02-28 01:07:06 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:07:06.931832 | orchestrator | 2026-02-28 01:07:06 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:09.988462 | orchestrator | 2026-02-28 01:07:09 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED 2026-02-28 01:07:09.989874 | orchestrator | 2026-02-28 01:07:09 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:07:09.991627 | orchestrator | 2026-02-28 01:07:09 | INFO  | Task 9b2f3619-d4ca-45c0-a7ab-ed503e9261db is in state STARTED 2026-02-28 01:07:09.996345 | orchestrator | 2026-02-28 01:07:09.996404 | orchestrator | 2026-02-28 01:07:09.996425 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:07:09.996444 | 
orchestrator | 2026-02-28 01:07:09.996459 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:07:09.996475 | orchestrator | Saturday 28 February 2026 01:03:35 +0000 (0:00:00.393) 0:00:00.393 ***** 2026-02-28 01:07:09.996491 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:07:09.996502 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:07:09.996511 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:07:09.996520 | orchestrator | 2026-02-28 01:07:09.996530 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:07:09.996539 | orchestrator | Saturday 28 February 2026 01:03:36 +0000 (0:00:00.618) 0:00:01.011 ***** 2026-02-28 01:07:09.996548 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-02-28 01:07:09.996558 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-02-28 01:07:09.996567 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-02-28 01:07:09.996576 | orchestrator | 2026-02-28 01:07:09.996585 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-02-28 01:07:09.996594 | orchestrator | 2026-02-28 01:07:09.996604 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-28 01:07:09.996613 | orchestrator | Saturday 28 February 2026 01:03:37 +0000 (0:00:00.968) 0:00:01.980 ***** 2026-02-28 01:07:09.996622 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:07:09.996632 | orchestrator | 2026-02-28 01:07:09.996641 | orchestrator | TASK [service-ks-register : glance | Creating/deleting services] *************** 2026-02-28 01:07:09.996650 | orchestrator | Saturday 28 February 2026 01:03:37 +0000 (0:00:00.793) 0:00:02.774 ***** 2026-02-28 01:07:09.996659 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 
2026-02-28 01:07:09.996681 | orchestrator | 2026-02-28 01:07:09.996690 | orchestrator | TASK [service-ks-register : glance | Creating/deleting endpoints] ************** 2026-02-28 01:07:09.996759 | orchestrator | Saturday 28 February 2026 01:03:42 +0000 (0:00:04.489) 0:00:07.264 ***** 2026-02-28 01:07:09.996769 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-02-28 01:07:09.996779 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-02-28 01:07:09.996788 | orchestrator | 2026-02-28 01:07:09.996796 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-02-28 01:07:09.996806 | orchestrator | Saturday 28 February 2026 01:03:49 +0000 (0:00:07.635) 0:00:14.899 ***** 2026-02-28 01:07:09.996840 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-02-28 01:07:09.996852 | orchestrator | 2026-02-28 01:07:09.996867 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-02-28 01:07:09.996888 | orchestrator | Saturday 28 February 2026 01:03:53 +0000 (0:00:03.366) 0:00:18.265 ***** 2026-02-28 01:07:09.996903 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-02-28 01:07:09.996918 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-28 01:07:09.996932 | orchestrator | 2026-02-28 01:07:09.996946 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-02-28 01:07:09.996961 | orchestrator | Saturday 28 February 2026 01:03:57 +0000 (0:00:04.191) 0:00:22.457 ***** 2026-02-28 01:07:09.996974 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-28 01:07:09.996989 | orchestrator | 2026-02-28 01:07:09.997004 | orchestrator | TASK [service-ks-register : glance | Granting/revoking user roles] ************* 2026-02-28 01:07:09.997019 | orchestrator 
| Saturday 28 February 2026 01:04:01 +0000 (0:00:03.645) 0:00:26.103 ***** 2026-02-28 01:07:09.997036 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-02-28 01:07:09.997051 | orchestrator | 2026-02-28 01:07:09.997067 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-02-28 01:07:09.997082 | orchestrator | Saturday 28 February 2026 01:04:05 +0000 (0:00:03.907) 0:00:30.010 ***** 2026-02-28 01:07:09.997146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:07:09.997172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', 
'']}}}}) 2026-02-28 01:07:09.997218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:07:09.997238 | orchestrator | 2026-02-28 01:07:09.997254 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-28 01:07:09.997269 | orchestrator | Saturday 
28 February 2026 01:04:08 +0000 (0:00:03.606) 0:00:33.619 *****
2026-02-28 01:07:09.997291 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 01:07:09.997302 | orchestrator |
2026-02-28 01:07:09.997311 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-02-28 01:07:09.997320 | orchestrator | Saturday 28 February 2026 01:04:09 +0000 (0:00:00.929) 0:00:34.549 *****
2026-02-28 01:07:09.997329 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:07:09.997338 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:07:09.997346 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:07:09.997355 | orchestrator |
2026-02-28 01:07:09.997364 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-02-28 01:07:09.997373 | orchestrator | Saturday 28 February 2026 01:04:14 +0000 (0:00:05.311) 0:00:39.860 *****
2026-02-28 01:07:09.997382 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-02-28 01:07:09.997392 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-02-28 01:07:09.997408 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-02-28 01:07:09.997417 | orchestrator |
2026-02-28 01:07:09.997425 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-02-28 01:07:09.997434 | orchestrator | Saturday 28 February 2026 01:04:16 +0000 (0:00:01.898) 0:00:41.759 *****
2026-02-28 01:07:09.997443 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-02-28 01:07:09.997452 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-02-28 01:07:09.997461 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-02-28 01:07:09.997470 | orchestrator |
2026-02-28 01:07:09.997478 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-02-28 01:07:09.997487 | orchestrator | Saturday 28 February 2026 01:04:18 +0000 (0:00:01.577) 0:00:43.337 *****
2026-02-28 01:07:09.997496 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:07:09.997505 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:07:09.997514 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:07:09.997522 | orchestrator |
2026-02-28 01:07:09.997532 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-02-28 01:07:09.997541 | orchestrator | Saturday 28 February 2026 01:04:19 +0000 (0:00:00.983) 0:00:44.320 *****
2026-02-28 01:07:09.997550 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:07:09.997558 | orchestrator |
2026-02-28 01:07:09.997567 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-02-28 01:07:09.997576 | orchestrator | Saturday 28 February 2026 01:04:19 +0000 (0:00:00.166) 0:00:44.487 *****
2026-02-28 01:07:09.997585 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:07:09.997595 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:07:09.997610 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:07:09.997631 | orchestrator |
2026-02-28 01:07:09.997647 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-28 01:07:09.997661 | orchestrator | Saturday 28 February 2026 01:04:19 +0000 (0:00:00.451)
0:00:44.938 ***** 2026-02-28 01:07:09.997676 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:07:09.997689 | orchestrator | 2026-02-28 01:07:09.997728 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-02-28 01:07:09.997742 | orchestrator | Saturday 28 February 2026 01:04:21 +0000 (0:00:01.321) 0:00:46.260 ***** 2026-02-28 01:07:09.997778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:07:09.997810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 
2026-02-28 01:07:09.997828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:07:09.997838 | orchestrator | 2026-02-28 01:07:09.997847 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-02-28 01:07:09.997862 | orchestrator | Saturday 28 
February 2026 01:04:28 +0000 (0:00:06.701) 0:00:52.961 ***** 2026-02-28 01:07:09.997880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 01:07:09.997890 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:09.997900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 01:07:09.997910 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:09.997931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 01:07:09.997947 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:09.997957 | orchestrator | 2026-02-28 01:07:09.997966 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-02-28 01:07:09.997976 | orchestrator | Saturday 28 February 2026 01:04:33 +0000 (0:00:05.090) 0:00:58.051 ***** 2026-02-28 01:07:09.997992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 01:07:09.998008 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:09.998098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 01:07:09.998126 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:09.998142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 01:07:09.998157 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:09.998172 | orchestrator | 2026-02-28 01:07:09.998187 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-02-28 01:07:09.998203 | orchestrator | Saturday 28 February 2026 01:04:37 +0000 (0:00:04.806) 0:01:02.858 ***** 2026-02-28 01:07:09.998218 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:09.998234 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:09.998249 | orchestrator | skipping: 
[testbed-node-0] 2026-02-28 01:07:09.998263 | orchestrator | 2026-02-28 01:07:09.998279 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-02-28 01:07:09.998295 | orchestrator | Saturday 28 February 2026 01:04:43 +0000 (0:00:05.558) 0:01:08.416 ***** 2026-02-28 01:07:09.998337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:07:09.998367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:07:09.998386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:07:09.998403 | orchestrator | 2026-02-28 01:07:09.998418 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-02-28 01:07:09.998429 | orchestrator | 2026-02-28 01:07:09 | INFO  | Task 7412b3b9-a78a-42fe-89ac-55c2d9ab9531 is in state SUCCESS 2026-02-28 01:07:09.998438 | orchestrator | Saturday 28 February 2026 01:04:50
+0000 (0:00:06.644) 0:01:15.060 ***** 2026-02-28 01:07:09.998447 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:07:09.998456 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:09.998465 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:07:09.998474 | orchestrator | 2026-02-28 01:07:09.998483 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-02-28 01:07:09.998492 | orchestrator | Saturday 28 February 2026 01:04:58 +0000 (0:00:08.691) 0:01:23.751 ***** 2026-02-28 01:07:09.998500 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:09.998509 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:09.998518 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:09.998527 | orchestrator | 2026-02-28 01:07:09.998536 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-02-28 01:07:09.998544 | orchestrator | Saturday 28 February 2026 01:05:03 +0000 (0:00:04.435) 0:01:28.187 ***** 2026-02-28 01:07:09.998553 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:09.998562 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:09.998571 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:09.998580 | orchestrator | 2026-02-28 01:07:09.998589 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-02-28 01:07:09.998598 | orchestrator | Saturday 28 February 2026 01:05:10 +0000 (0:00:07.211) 0:01:35.399 ***** 2026-02-28 01:07:09.998606 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:09.998615 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:09.998624 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:09.998633 | orchestrator | 2026-02-28 01:07:09.998642 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-02-28 01:07:09.998650 | orchestrator | Saturday 28 February 2026 01:05:18 +0000 
(0:00:08.296) 0:01:43.696 ***** 2026-02-28 01:07:09.998659 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:09.998668 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:09.998677 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:09.998686 | orchestrator | 2026-02-28 01:07:09.998719 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-02-28 01:07:09.998732 | orchestrator | Saturday 28 February 2026 01:05:19 +0000 (0:00:00.510) 0:01:44.206 ***** 2026-02-28 01:07:09.998747 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-28 01:07:09.998762 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:09.998777 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-28 01:07:09.998792 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:09.998806 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-28 01:07:09.998831 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:09.998848 | orchestrator | 2026-02-28 01:07:09.998864 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-02-28 01:07:09.998879 | orchestrator | Saturday 28 February 2026 01:05:23 +0000 (0:00:04.656) 0:01:48.862 ***** 2026-02-28 01:07:09.998895 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:07:09.998910 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:09.998926 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:07:09.998941 | orchestrator | 2026-02-28 01:07:09.998957 | orchestrator | TASK [service-check-containers : glance | Check containers] ******************** 2026-02-28 01:07:09.998973 | orchestrator | Saturday 28 February 2026 01:05:31 +0000 (0:00:07.264) 0:01:56.127 ***** 2026-02-28 01:07:09.999007 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:07:09.999027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:07:09.999053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:07:09.999071 | orchestrator | 2026-02-28 01:07:09.999094 | orchestrator | TASK [service-check-containers : glance | Notify handlers to restart containers] *** 2026-02-28 01:07:09.999111 | orchestrator | Saturday 28 February 2026 01:05:36 +0000 (0:00:05.282) 0:02:01.410 ***** 2026-02-28 01:07:09.999126 | orchestrator | changed: [testbed-node-0] => { 2026-02-28 01:07:09.999142 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:07:09.999158 | orchestrator | } 2026-02-28 01:07:09.999174 | orchestrator | changed: [testbed-node-1] => { 2026-02-28 01:07:09.999189 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:07:09.999205 | orchestrator | } 2026-02-28 01:07:09.999221 | orchestrator | changed: 
[testbed-node-2] => { 2026-02-28 01:07:09.999236 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:07:09.999251 | orchestrator | } 2026-02-28 01:07:09.999268 | orchestrator | 2026-02-28 01:07:09.999291 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-28 01:07:09.999306 | orchestrator | Saturday 28 February 2026 01:05:36 +0000 (0:00:00.438) 0:02:01.848 ***** 2026-02-28 01:07:09.999323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 01:07:09.999350 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:09.999374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 01:07:09.999392 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:09.999419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 01:07:09.999449 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:09.999465 
| orchestrator | 2026-02-28 01:07:09.999482 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-28 01:07:09.999497 | orchestrator | Saturday 28 February 2026 01:05:45 +0000 (0:00:08.850) 0:02:10.699 ***** 2026-02-28 01:07:09.999512 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:09.999527 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:09.999543 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:09.999558 | orchestrator | 2026-02-28 01:07:09.999573 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-02-28 01:07:09.999589 | orchestrator | Saturday 28 February 2026 01:05:46 +0000 (0:00:00.755) 0:02:11.454 ***** 2026-02-28 01:07:09.999604 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:09.999620 | orchestrator | 2026-02-28 01:07:09.999635 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-02-28 01:07:09.999651 | orchestrator | Saturday 28 February 2026 01:05:48 +0000 (0:00:01.981) 0:02:13.436 ***** 2026-02-28 01:07:09.999666 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:09.999682 | orchestrator | 2026-02-28 01:07:09.999717 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-02-28 01:07:09.999734 | orchestrator | Saturday 28 February 2026 01:05:50 +0000 (0:00:02.280) 0:02:15.716 ***** 2026-02-28 01:07:09.999743 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:09.999752 | orchestrator | 2026-02-28 01:07:09.999761 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-02-28 01:07:09.999770 | orchestrator | Saturday 28 February 2026 01:05:53 +0000 (0:00:02.324) 0:02:18.041 ***** 2026-02-28 01:07:09.999778 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:09.999787 | orchestrator | 2026-02-28 01:07:09.999796 | orchestrator | TASK [glance : 
Disable log_bin_trust_function_creators function] *************** 2026-02-28 01:07:09.999804 | orchestrator | Saturday 28 February 2026 01:06:22 +0000 (0:00:28.979) 0:02:47.020 ***** 2026-02-28 01:07:09.999813 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:09.999822 | orchestrator | 2026-02-28 01:07:09.999830 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-28 01:07:09.999839 | orchestrator | Saturday 28 February 2026 01:06:24 +0000 (0:00:02.346) 0:02:49.367 ***** 2026-02-28 01:07:09.999848 | orchestrator | 2026-02-28 01:07:09.999857 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-28 01:07:09.999866 | orchestrator | Saturday 28 February 2026 01:06:24 +0000 (0:00:00.064) 0:02:49.432 ***** 2026-02-28 01:07:09.999874 | orchestrator | 2026-02-28 01:07:09.999883 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-28 01:07:09.999892 | orchestrator | Saturday 28 February 2026 01:06:24 +0000 (0:00:00.069) 0:02:49.501 ***** 2026-02-28 01:07:09.999900 | orchestrator | 2026-02-28 01:07:09.999909 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-02-28 01:07:09.999918 | orchestrator | Saturday 28 February 2026 01:06:24 +0000 (0:00:00.070) 0:02:49.572 ***** 2026-02-28 01:07:09.999927 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:09.999936 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:07:09.999945 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:07:09.999954 | orchestrator | 2026-02-28 01:07:09.999962 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:07:09.999977 | orchestrator | testbed-node-0 : ok=28  changed=21  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-28 01:07:09.999987 | orchestrator | testbed-node-1 : ok=17  changed=11  
unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-28 01:07:10.000006 | orchestrator | testbed-node-2 : ok=17  changed=11  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-28 01:07:10.000022 | orchestrator | 2026-02-28 01:07:10.000035 | orchestrator | 2026-02-28 01:07:10.000050 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:07:10.000074 | orchestrator | Saturday 28 February 2026 01:07:07 +0000 (0:00:42.852) 0:03:32.424 ***** 2026-02-28 01:07:10.000090 | orchestrator | =============================================================================== 2026-02-28 01:07:10.000104 | orchestrator | glance : Restart glance-api container ---------------------------------- 42.85s 2026-02-28 01:07:10.000120 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.98s 2026-02-28 01:07:10.000134 | orchestrator | service-check-containers : Include tasks -------------------------------- 8.85s 2026-02-28 01:07:10.000149 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 8.69s 2026-02-28 01:07:10.000164 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 8.30s 2026-02-28 01:07:10.000178 | orchestrator | service-ks-register : glance | Creating/deleting endpoints -------------- 7.64s 2026-02-28 01:07:10.000194 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 7.26s 2026-02-28 01:07:10.000209 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 7.21s 2026-02-28 01:07:10.000223 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 6.70s 2026-02-28 01:07:10.000238 | orchestrator | glance : Copying over config.json files for services -------------------- 6.64s 2026-02-28 01:07:10.000254 | orchestrator | glance : Creating TLS backend PEM File 
---------------------------------- 5.56s
2026-02-28 01:07:10.000268 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 5.31s
2026-02-28 01:07:10.000284 | orchestrator | service-check-containers : glance | Check containers -------------------- 5.28s
2026-02-28 01:07:10.000298 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 5.09s
2026-02-28 01:07:10.000314 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.81s
2026-02-28 01:07:10.000330 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.66s
2026-02-28 01:07:10.000345 | orchestrator | service-ks-register : glance | Creating/deleting services --------------- 4.49s
2026-02-28 01:07:10.000359 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.44s
2026-02-28 01:07:10.000374 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.19s
2026-02-28 01:07:10.000389 | orchestrator | service-ks-register : glance | Granting/revoking user roles ------------- 3.91s
2026-02-28 01:07:10.000404 | orchestrator | 2026-02-28 01:07:09 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED
2026-02-28 01:07:10.000419 | orchestrator | 2026-02-28 01:07:09 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:07:13.026367 | orchestrator | 2026-02-28 01:07:13 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED
2026-02-28 01:07:13.027580 | orchestrator | 2026-02-28 01:07:13 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED
2026-02-28 01:07:13.028523 | orchestrator | 2026-02-28 01:07:13 | INFO  | Task 9b2f3619-d4ca-45c0-a7ab-ed503e9261db is in state STARTED
2026-02-28 01:07:13.029288 | orchestrator | 2026-02-28 01:07:13 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED
2026-02-28 01:07:13.029334 | orchestrator | 2026-02-28 01:07:13 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:07:16.075749 | orchestrator | 2026-02-28 01:07:16 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED
2026-02-28 01:07:16.076892 | orchestrator | 2026-02-28 01:07:16 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED
2026-02-28 01:07:16.078265 | orchestrator | 2026-02-28 01:07:16 | INFO  | Task 9b2f3619-d4ca-45c0-a7ab-ed503e9261db is in state STARTED
2026-02-28 01:07:16.080271 | orchestrator | 2026-02-28 01:07:16 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED
2026-02-28 01:07:16.080314 | orchestrator | 2026-02-28 01:07:16 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:07:19.126607 | orchestrator | 2026-02-28 01:07:19 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED
2026-02-28 01:07:19.128538 | orchestrator | 2026-02-28 01:07:19 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED
2026-02-28 01:07:19.130230 | orchestrator | 2026-02-28 01:07:19 | INFO  | Task 9b2f3619-d4ca-45c0-a7ab-ed503e9261db is in state STARTED
2026-02-28 01:07:19.131198 | orchestrator | 2026-02-28 01:07:19 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED
2026-02-28 01:07:19.131249 | orchestrator | 2026-02-28 01:07:19 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:07:22.174771 | orchestrator | 2026-02-28 01:07:22 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED
2026-02-28 01:07:22.174895 | orchestrator | 2026-02-28 01:07:22 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED
2026-02-28 01:07:22.175227 | orchestrator | 2026-02-28 01:07:22 | INFO  | Task 9b2f3619-d4ca-45c0-a7ab-ed503e9261db is in state STARTED
2026-02-28 01:07:22.176296 | orchestrator | 2026-02-28 01:07:22 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED
2026-02-28 01:07:22.176364 | orchestrator | 2026-02-28 01:07:22 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:07:25.214095 | orchestrator | 2026-02-28 01:07:25 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED
2026-02-28 01:07:25.218256 | orchestrator | 2026-02-28 01:07:25 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED
2026-02-28 01:07:25.219949 | orchestrator | 2026-02-28 01:07:25 | INFO  | Task 9b2f3619-d4ca-45c0-a7ab-ed503e9261db is in state STARTED
2026-02-28 01:07:25.221214 | orchestrator | 2026-02-28 01:07:25 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED
2026-02-28 01:07:25.221264 | orchestrator | 2026-02-28 01:07:25 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:07:28.249269 | orchestrator | 2026-02-28 01:07:28 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED
2026-02-28 01:07:28.250203 | orchestrator | 2026-02-28 01:07:28 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED
2026-02-28 01:07:28.252092 | orchestrator | 2026-02-28 01:07:28 | INFO  | Task 9b2f3619-d4ca-45c0-a7ab-ed503e9261db is in state STARTED
2026-02-28 01:07:28.252938 | orchestrator | 2026-02-28 01:07:28 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED
2026-02-28 01:07:28.252973 | orchestrator | 2026-02-28 01:07:28 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:07:31.291891 | orchestrator | 2026-02-28 01:07:31 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED
2026-02-28 01:07:31.292203 | orchestrator | 2026-02-28 01:07:31 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED
2026-02-28 01:07:31.292983 | orchestrator | 2026-02-28 01:07:31 | INFO  | Task 9b2f3619-d4ca-45c0-a7ab-ed503e9261db is in state STARTED
2026-02-28 01:07:31.294370 | orchestrator | 2026-02-28 01:07:31 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED
2026-02-28 01:07:31.294424 | orchestrator | 2026-02-28 01:07:31 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:07:34.327113 | orchestrator | 2026-02-28 01:07:34 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state STARTED
2026-02-28 01:07:34.328011 | orchestrator | 2026-02-28 01:07:34 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED
2026-02-28 01:07:34.329357 | orchestrator | 2026-02-28 01:07:34 | INFO  | Task 9b2f3619-d4ca-45c0-a7ab-ed503e9261db is in state STARTED
2026-02-28 01:07:34.330813 | orchestrator | 2026-02-28 01:07:34 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED
2026-02-28 01:07:34.331059 | orchestrator | 2026-02-28 01:07:34 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:07:37.370339 | orchestrator | 2026-02-28 01:07:37 | INFO  | Task c9bc6f62-00b4-4207-b064-516647d36890 is in state SUCCESS
2026-02-28 01:07:37.371671 | orchestrator |
2026-02-28 01:07:37.371759 | orchestrator |
2026-02-28 01:07:37.371790 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 01:07:37.371801 | orchestrator |
2026-02-28 01:07:37.371820 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 01:07:37.371841 | orchestrator | Saturday 28 February 2026 01:03:58 +0000 (0:00:00.271) 0:00:00.271 *****
2026-02-28 01:07:37.371858 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:07:37.371869 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:07:37.371877 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:07:37.371886 | orchestrator |
2026-02-28 01:07:37.371894 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 01:07:37.371903 | orchestrator | Saturday 28 February 2026 01:03:58 +0000 (0:00:00.363) 0:00:00.634 *****
2026-02-28 01:07:37.371911 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-02-28 01:07:37.371920 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-02-28 01:07:37.371939 |
orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-02-28 01:07:37.371947 | orchestrator |
2026-02-28 01:07:37.371956 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-02-28 01:07:37.371965 | orchestrator |
2026-02-28 01:07:37.371974 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-02-28 01:07:37.371982 | orchestrator | Saturday 28 February 2026 01:03:58 +0000 (0:00:00.528) 0:00:01.163 *****
2026-02-28 01:07:37.372005 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 01:07:37.372015 | orchestrator |
2026-02-28 01:07:37.372023 | orchestrator | TASK [service-ks-register : cinder | Creating/deleting services] ***************
2026-02-28 01:07:37.372032 | orchestrator | Saturday 28 February 2026 01:03:59 +0000 (0:00:00.676) 0:00:01.840 *****
2026-02-28 01:07:37.372041 | orchestrator | changed: [testbed-node-0] => (item=cinder (block-storage))
2026-02-28 01:07:37.372059 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-02-28 01:07:37.372068 | orchestrator |
2026-02-28 01:07:37.372077 | orchestrator | TASK [service-ks-register : cinder | Creating/deleting endpoints] **************
2026-02-28 01:07:37.372085 | orchestrator | Saturday 28 February 2026 01:04:06 +0000 (0:00:06.591) 0:00:08.432 *****
2026-02-28 01:07:37.372093 | orchestrator | changed: [testbed-node-0] => (item=cinder -> https://api-int.testbed.osism.xyz:8776/v3 -> internal)
2026-02-28 01:07:37.372102 | orchestrator | changed: [testbed-node-0] => (item=cinder -> https://api.testbed.osism.xyz:8776/v3 -> public)
2026-02-28 01:07:37.372110 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-02-28 01:07:37.372119 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-02-28 01:07:37.372128 | orchestrator |
2026-02-28 01:07:37.372136 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-02-28 01:07:37.372145 | orchestrator | Saturday 28 February 2026 01:04:19 +0000 (0:00:13.448) 0:00:21.881 *****
2026-02-28 01:07:37.372153 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-28 01:07:37.372181 | orchestrator |
2026-02-28 01:07:37.372190 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-02-28 01:07:37.372198 | orchestrator | Saturday 28 February 2026 01:04:23 +0000 (0:00:03.715) 0:00:25.596 *****
2026-02-28 01:07:37.372207 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-02-28 01:07:37.372216 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-28 01:07:37.372224 | orchestrator |
2026-02-28 01:07:37.372233 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-02-28 01:07:37.372241 | orchestrator | Saturday 28 February 2026 01:04:27 +0000 (0:00:04.190) 0:00:29.786 *****
2026-02-28 01:07:37.372250 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-28 01:07:37.372259 | orchestrator |
2026-02-28 01:07:37.372268 | orchestrator | TASK [service-ks-register : cinder | Granting/revoking user roles] *************
2026-02-28 01:07:37.372661 | orchestrator | Saturday 28 February 2026 01:04:31 +0000 (0:00:03.896) 0:00:33.683 *****
2026-02-28 01:07:37.372670 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2026-02-28 01:07:37.372675 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2026-02-28 01:07:37.372681 | orchestrator |
2026-02-28 01:07:37.372687 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2026-02-28 01:07:37.372715 | orchestrator |
Saturday 28 February 2026 01:04:40 +0000 (0:00:08.611) 0:00:42.295 ***** 2026-02-28 01:07:37.372755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:07:37.372772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:07:37.372780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:07:37.372796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.372803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.372808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.372832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.372844 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.372850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.372862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.372868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.372874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.372880 | orchestrator | 2026-02-28 01:07:37.372900 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-28 01:07:37.372907 | orchestrator | Saturday 28 February 2026 01:04:43 +0000 (0:00:03.321) 0:00:45.617 ***** 2026-02-28 01:07:37.372913 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:37.372919 | orchestrator | 
skipping: [testbed-node-1]
2026-02-28 01:07:37.372924 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:07:37.372930 | orchestrator |
2026-02-28 01:07:37.372935 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-02-28 01:07:37.372941 | orchestrator | Saturday 28 February 2026 01:04:44 +0000 (0:00:00.603) 0:00:46.220 *****
2026-02-28 01:07:37.372946 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 01:07:37.372952 | orchestrator |
2026-02-28 01:07:37.372958 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-02-28 01:07:37.372963 | orchestrator | Saturday 28 February 2026 01:04:45 +0000 (0:00:01.106) 0:00:47.327 *****
2026-02-28 01:07:37.372969 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-02-28 01:07:37.372975 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume)
2026-02-28 01:07:37.372980 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume)
2026-02-28 01:07:37.372990 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup)
2026-02-28 01:07:37.373008 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup)
2026-02-28 01:07:37.373017 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup)
2026-02-28 01:07:37.373023 | orchestrator |
2026-02-28 01:07:37.373028 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2026-02-28 01:07:37.373034 | orchestrator | Saturday 28 February 2026 01:04:47 +0000 (0:00:02.554) 0:00:49.881 *****
2026-02-28 01:07:37.373040 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-02-28 01:07:37.373046 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 
'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-02-28 01:07:37.373069 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-02-28 01:07:37.373076 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-02-28 01:07:37.373091 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-02-28 01:07:37.373098 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-02-28 01:07:37.373104 | orchestrator | 
skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-02-28 01:07:37.373126 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-02-28 01:07:37.373139 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-02-28 01:07:37.373146 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-02-28 01:07:37.373152 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-02-28 01:07:37.373191 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-02-28 01:07:37.373203 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-02-28 01:07:37.373219 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-02-28 01:07:37.373225 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 
'volumes', 'enabled': True}]) 2026-02-28 01:07:37.373231 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-02-28 01:07:37.373256 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-02-28 01:07:37.373270 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-02-28 01:07:37.373278 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-02-28 01:07:37.373286 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-02-28 01:07:37.373292 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-02-28 01:07:37.373315 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 
'backups', 'enabled': True}]) 2026-02-28 01:07:37.373329 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-02-28 01:07:37.373336 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-02-28 01:07:37.373343 | orchestrator | 2026-02-28 01:07:37.373349 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-02-28 01:07:37.373356 | orchestrator | Saturday 28 February 2026 01:04:56 +0000 (0:00:08.372) 0:00:58.254 ***** 
2026-02-28 01:07:37.373363 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-02-28 01:07:37.373371 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-02-28 01:07:37.373378 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-02-28 01:07:37.373384 | orchestrator | 2026-02-28 01:07:37.373391 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-02-28 01:07:37.373398 | orchestrator | Saturday 28 February 2026 01:04:59 +0000 (0:00:03.082) 0:01:01.336 ***** 2026-02-28 01:07:37.373404 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-02-28 01:07:37.373411 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-02-28 01:07:37.373417 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-02-28 01:07:37.373424 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-02-28 01:07:37.373430 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-02-28 01:07:37.373437 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 
'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-02-28 01:07:37.373449 | orchestrator | 2026-02-28 01:07:37.373456 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-02-28 01:07:37.373462 | orchestrator | Saturday 28 February 2026 01:05:02 +0000 (0:00:03.145) 0:01:04.482 ***** 2026-02-28 01:07:37.373469 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-02-28 01:07:37.373475 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-02-28 01:07:37.373481 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-02-28 01:07:37.373504 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-02-28 01:07:37.373511 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-02-28 01:07:37.373518 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-02-28 01:07:37.373524 | orchestrator | 2026-02-28 01:07:37.373530 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-02-28 01:07:37.373537 | orchestrator | Saturday 28 February 2026 01:05:03 +0000 (0:00:01.037) 0:01:05.519 ***** 2026-02-28 01:07:37.373543 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:37.373549 | orchestrator | 2026-02-28 01:07:37.373556 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-02-28 01:07:37.373562 | orchestrator | Saturday 28 February 2026 01:05:03 +0000 (0:00:00.129) 0:01:05.649 ***** 2026-02-28 01:07:37.373568 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:37.373575 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:37.373581 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:37.373588 | orchestrator | 2026-02-28 01:07:37.373594 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-28 01:07:37.373601 | orchestrator | Saturday 28 February 2026 
01:05:03 +0000 (0:00:00.339) 0:01:05.989 ***** 2026-02-28 01:07:37.373607 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:07:37.373613 | orchestrator | 2026-02-28 01:07:37.373620 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-02-28 01:07:37.373630 | orchestrator | Saturday 28 February 2026 01:05:05 +0000 (0:00:01.322) 0:01:07.311 ***** 2026-02-28 01:07:37.373638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:07:37.373645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:07:37.373673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:07:37.373680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.373689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.373711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.373717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.373723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.373733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.373756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.373766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.373772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 
'timeout': '30'}}}) 2026-02-28 01:07:37.373778 | orchestrator | 2026-02-28 01:07:37.373784 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-02-28 01:07:37.373789 | orchestrator | Saturday 28 February 2026 01:05:10 +0000 (0:00:05.690) 0:01:13.001 ***** 2026-02-28 01:07:37.373795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:07:37.373807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 
'timeout': '30'}}})  2026-02-28 01:07:37.373829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.373839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.373845 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:37.373851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:07:37.373857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.373868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.373889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.373896 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:37.373905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:07:37.373911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.373917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.373929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 
 2026-02-28 01:07:37.373934 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:37.373940 | orchestrator | 2026-02-28 01:07:37.373946 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-02-28 01:07:37.373951 | orchestrator | Saturday 28 February 2026 01:05:13 +0000 (0:00:02.293) 0:01:15.295 ***** 2026-02-28 01:07:37.373961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:07:37.373968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.373977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.373983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.373992 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:37.373998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:07:37.374005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.374050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.374061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.374067 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:37.374073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:07:37.374083 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.374089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.374109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.374115 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:37.374121 | orchestrator | 2026-02-28 01:07:37.374126 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-02-28 01:07:37.374132 | orchestrator | Saturday 28 February 2026 01:05:16 +0000 (0:00:03.427) 0:01:18.722 ***** 2026-02-28 01:07:37.374141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:07:37.374152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:07:37.374158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:07:37.374167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 
01:07:37.374238 | orchestrator | 2026-02-28 01:07:37.374243 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-02-28 01:07:37.374249 | orchestrator | Saturday 28 February 2026 01:05:21 +0000 (0:00:05.021) 0:01:23.744 ***** 2026-02-28 01:07:37.374255 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-02-28 01:07:37.374260 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:37.374266 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-02-28 01:07:37.374319 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:37.374325 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-02-28 01:07:37.374330 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:37.374336 | orchestrator | 2026-02-28 01:07:37.374342 | orchestrator | TASK [Configure uWSGI for Cinder] ********************************************** 2026-02-28 01:07:37.374347 | orchestrator | Saturday 28 February 2026 01:05:22 +0000 (0:00:01.336) 0:01:25.081 ***** 2026-02-28 01:07:37.374353 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:07:37.374358 | orchestrator | 2026-02-28 01:07:37.374363 | orchestrator | TASK [service-uwsgi-config : Copying over cinder-api uWSGI config] ************* 2026-02-28 01:07:37.374369 | orchestrator | Saturday 28 February 2026 01:05:24 +0000 (0:00:01.784) 0:01:26.865 ***** 2026-02-28 01:07:37.374374 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:37.374380 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:07:37.374385 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:07:37.374391 | orchestrator | 2026-02-28 01:07:37.374396 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-02-28 01:07:37.374402 | 
orchestrator | Saturday 28 February 2026 01:05:27 +0000 (0:00:03.173) 0:01:30.039 ***** 2026-02-28 01:07:37.374408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:07:37.374419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:07:37.374434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:07:37.374440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374446 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374467 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374502 | orchestrator | 2026-02-28 01:07:37.374508 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-02-28 01:07:37.374514 | orchestrator | Saturday 28 February 2026 01:05:45 +0000 (0:00:17.358) 0:01:47.398 ***** 2026-02-28 01:07:37.374519 | orchestrator | changed: [testbed-node-1] 
2026-02-28 01:07:37.374525 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:37.374531 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:07:37.374536 | orchestrator | 2026-02-28 01:07:37.374542 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-02-28 01:07:37.374553 | orchestrator | Saturday 28 February 2026 01:05:47 +0000 (0:00:02.455) 0:01:49.854 ***** 2026-02-28 01:07:37.374563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:07:37.374572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.374578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.374584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.374590 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:37.374596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:07:37.374616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.374629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.374639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.374648 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:37.374657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}}}})  2026-02-28 01:07:37.374666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.374688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.374812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.374830 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:37.374836 | orchestrator | 2026-02-28 01:07:37.374848 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-02-28 01:07:37.374854 | orchestrator | Saturday 28 February 2026 01:05:48 +0000 (0:00:00.811) 0:01:50.666 ***** 2026-02-28 01:07:37.374860 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:37.374865 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:37.374871 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:37.374876 | orchestrator | 2026-02-28 01:07:37.374881 | orchestrator | TASK [service-check-containers : cinder | Check containers] ******************** 2026-02-28 01:07:37.374887 | orchestrator | Saturday 28 February 2026 01:05:48 +0000 (0:00:00.408) 0:01:51.074 ***** 2026-02-28 01:07:37.374893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:07:37.374900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:07:37.374919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:07:37.374926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.374996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:37.375003 | orchestrator | 2026-02-28 01:07:37.375009 | orchestrator | TASK [service-check-containers : cinder | Notify handlers to restart containers] *** 2026-02-28 01:07:37.375016 | orchestrator | Saturday 28 February 2026 01:05:52 +0000 (0:00:03.473) 0:01:54.548 ***** 2026-02-28 01:07:37.375022 | orchestrator | changed: [testbed-node-0] => { 2026-02-28 01:07:37.375028 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:07:37.375035 | orchestrator | } 2026-02-28 01:07:37.375041 | orchestrator | changed: [testbed-node-1] => { 2026-02-28 01:07:37.375048 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:07:37.375058 | orchestrator | } 2026-02-28 01:07:37.375064 | orchestrator | changed: [testbed-node-2] => { 2026-02-28 01:07:37.375070 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:07:37.375077 | orchestrator | } 2026-02-28 01:07:37.375083 | orchestrator | 2026-02-28 01:07:37.375089 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-28 01:07:37.375095 | orchestrator | Saturday 28 February 2026 01:05:53 +0000 (0:00:00.680) 0:01:55.229 ***** 2026-02-28 01:07:37.375102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:07:37.375113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.375123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.375130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.375136 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:37.375143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}}}})  2026-02-28 01:07:37.375153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.375162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.375168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.375174 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:37.375183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:07:37.375189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.375199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.375205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:07:37.375211 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:37.375217 | orchestrator | 2026-02-28 01:07:37.375226 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-28 01:07:37.375232 | orchestrator | Saturday 28 February 2026 01:05:53 +0000 (0:00:00.884) 0:01:56.114 ***** 2026-02-28 01:07:37.375237 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:37.375243 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:37.375250 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:37.375258 | 
orchestrator | 2026-02-28 01:07:37.375265 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-02-28 01:07:37.375276 | orchestrator | Saturday 28 February 2026 01:05:54 +0000 (0:00:00.348) 0:01:56.462 ***** 2026-02-28 01:07:37.375286 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:37.375296 | orchestrator | 2026-02-28 01:07:37.375303 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-02-28 01:07:37.375311 | orchestrator | Saturday 28 February 2026 01:05:56 +0000 (0:00:02.356) 0:01:58.819 ***** 2026-02-28 01:07:37.375318 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:37.375327 | orchestrator | 2026-02-28 01:07:37.375334 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-02-28 01:07:37.375342 | orchestrator | Saturday 28 February 2026 01:05:59 +0000 (0:00:03.017) 0:02:01.836 ***** 2026-02-28 01:07:37.375350 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:37.375357 | orchestrator | 2026-02-28 01:07:37.375364 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-28 01:07:37.375371 | orchestrator | Saturday 28 February 2026 01:06:18 +0000 (0:00:19.214) 0:02:21.051 ***** 2026-02-28 01:07:37.375378 | orchestrator | 2026-02-28 01:07:37.375392 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-28 01:07:37.375400 | orchestrator | Saturday 28 February 2026 01:06:18 +0000 (0:00:00.089) 0:02:21.141 ***** 2026-02-28 01:07:37.375407 | orchestrator | 2026-02-28 01:07:37.375414 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-28 01:07:37.375421 | orchestrator | Saturday 28 February 2026 01:06:19 +0000 (0:00:00.099) 0:02:21.240 ***** 2026-02-28 01:07:37.375434 | orchestrator | 2026-02-28 01:07:37.375441 | orchestrator | RUNNING 
HANDLER [cinder : Restart cinder-api container] ************************ 2026-02-28 01:07:37.375448 | orchestrator | Saturday 28 February 2026 01:06:19 +0000 (0:00:00.085) 0:02:21.326 ***** 2026-02-28 01:07:37.375455 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:37.375463 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:07:37.375471 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:07:37.375478 | orchestrator | 2026-02-28 01:07:37.375485 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-02-28 01:07:37.375493 | orchestrator | Saturday 28 February 2026 01:06:50 +0000 (0:00:31.400) 0:02:52.726 ***** 2026-02-28 01:07:37.375500 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:07:37.375508 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:07:37.375516 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:37.375523 | orchestrator | 2026-02-28 01:07:37.375530 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-02-28 01:07:37.375537 | orchestrator | Saturday 28 February 2026 01:06:59 +0000 (0:00:09.208) 0:03:01.934 ***** 2026-02-28 01:07:37.375545 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:37.375552 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:07:37.375559 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:07:37.375567 | orchestrator | 2026-02-28 01:07:37.375574 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-02-28 01:07:37.375581 | orchestrator | Saturday 28 February 2026 01:07:26 +0000 (0:00:26.600) 0:03:28.535 ***** 2026-02-28 01:07:37.375588 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:37.375595 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:07:37.375602 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:07:37.375610 | orchestrator | 2026-02-28 01:07:37.375617 | orchestrator | RUNNING HANDLER [cinder 
: Wait for cinder services to update service versions] *** 2026-02-28 01:07:37.375624 | orchestrator | Saturday 28 February 2026 01:07:34 +0000 (0:00:07.670) 0:03:36.205 ***** 2026-02-28 01:07:37.375631 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:37.375638 | orchestrator | 2026-02-28 01:07:37.375644 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:07:37.375653 | orchestrator | testbed-node-0 : ok=32  changed=23  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-28 01:07:37.375661 | orchestrator | testbed-node-1 : ok=23  changed=16  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-28 01:07:37.375668 | orchestrator | testbed-node-2 : ok=23  changed=16  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-28 01:07:37.375675 | orchestrator | 2026-02-28 01:07:37.375682 | orchestrator | 2026-02-28 01:07:37.375689 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:07:37.375725 | orchestrator | Saturday 28 February 2026 01:07:34 +0000 (0:00:00.372) 0:03:36.577 ***** 2026-02-28 01:07:37.375733 | orchestrator | =============================================================================== 2026-02-28 01:07:37.375741 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 31.40s 2026-02-28 01:07:37.375748 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 26.60s 2026-02-28 01:07:37.375755 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.21s 2026-02-28 01:07:37.375762 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 17.36s 2026-02-28 01:07:37.375770 | orchestrator | service-ks-register : cinder | Creating/deleting endpoints ------------- 13.45s 2026-02-28 01:07:37.375777 | orchestrator | cinder : Restart cinder-scheduler container 
----------------------------- 9.21s 2026-02-28 01:07:37.375785 | orchestrator | service-ks-register : cinder | Granting/revoking user roles ------------- 8.61s 2026-02-28 01:07:37.375793 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 8.37s 2026-02-28 01:07:37.375816 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 7.67s 2026-02-28 01:07:37.375824 | orchestrator | service-ks-register : cinder | Creating/deleting services --------------- 6.59s 2026-02-28 01:07:37.375831 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.69s 2026-02-28 01:07:37.375839 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.02s 2026-02-28 01:07:37.375847 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.19s 2026-02-28 01:07:37.375854 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.90s 2026-02-28 01:07:37.375861 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.71s 2026-02-28 01:07:37.375868 | orchestrator | service-check-containers : cinder | Check containers -------------------- 3.47s 2026-02-28 01:07:37.375875 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 3.43s 2026-02-28 01:07:37.375883 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.32s 2026-02-28 01:07:37.375890 | orchestrator | service-uwsgi-config : Copying over cinder-api uWSGI config ------------- 3.17s 2026-02-28 01:07:37.375896 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.15s 2026-02-28 01:07:37.375904 | orchestrator | 2026-02-28 01:07:37 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:07:37.375919 | orchestrator | 2026-02-28 01:07:37 | INFO  | Task 
9b2f3619-d4ca-45c0-a7ab-ed503e9261db is in state STARTED 2026-02-28 01:07:37.376291 | orchestrator | 2026-02-28 01:07:37 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:07:37.378182 | orchestrator | 2026-02-28 01:07:37 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:07:37.378210 | orchestrator | 2026-02-28 01:07:37 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:40.435364 | orchestrator | 2026-02-28 01:07:40 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:07:40.435469 | orchestrator | 2026-02-28 01:07:40 | INFO  | Task 9b2f3619-d4ca-45c0-a7ab-ed503e9261db is in state STARTED 2026-02-28 01:07:40.447264 | orchestrator | 2026-02-28 01:07:40 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:07:40.447332 | orchestrator | 2026-02-28 01:07:40 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:07:40.447343 | orchestrator | 2026-02-28 01:07:40 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:43.474425 | orchestrator | 2026-02-28 01:07:43 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:07:43.475960 | orchestrator | 2026-02-28 01:07:43 | INFO  | Task 9b2f3619-d4ca-45c0-a7ab-ed503e9261db is in state STARTED 2026-02-28 01:07:43.477172 | orchestrator | 2026-02-28 01:07:43 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:07:43.479891 | orchestrator | 2026-02-28 01:07:43 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:07:43.479953 | orchestrator | 2026-02-28 01:07:43 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:46.524134 | orchestrator | 2026-02-28 01:07:46 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:07:46.524889 | orchestrator | 2026-02-28 01:07:46 | INFO  | Task 
9b2f3619-d4ca-45c0-a7ab-ed503e9261db is in state STARTED 2026-02-28 01:09:35.933154 | orchestrator | 2026-02-28 01:09:35 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:09:35.934003 | orchestrator | 2026-02-28 01:09:35 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:09:35.934076 | orchestrator | 2026-02-28 01:09:35 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:09:38.976855 | orchestrator | 2026-02-28 01:09:38 | INFO  | Task cdca8b1c-1211-47b8-adec-1ec76a27efa1 is in state STARTED 2026-02-28 01:09:38.978723 | orchestrator | 2026-02-28 01:09:38 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:09:38.985497 | orchestrator | 2026-02-28 01:09:38 | INFO  | Task 9b2f3619-d4ca-45c0-a7ab-ed503e9261db is in state SUCCESS 2026-02-28 01:09:38.985583 | orchestrator | 2026-02-28 01:09:38.987790 | orchestrator | 2026-02-28 01:09:38.987845 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:09:38.987853 | orchestrator | 2026-02-28 01:09:38.987858 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:09:38.987863 | orchestrator | Saturday 28 February 2026 01:07:13 +0000 (0:00:00.303) 0:00:00.303 ***** 2026-02-28 01:09:38.987867 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:09:38.987872 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:09:38.987877 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:09:38.987881 | orchestrator | 2026-02-28 01:09:38.987885 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:09:38.987889 | orchestrator | Saturday 28 February 2026 01:07:13 +0000 (0:00:00.367) 0:00:00.670 ***** 2026-02-28 01:09:38.987894 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-02-28 01:09:38.987899 | orchestrator | ok: [testbed-node-1] => 
(item=enable_barbican_True) 2026-02-28 01:09:38.987903 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-02-28 01:09:38.987907 | orchestrator | 2026-02-28 01:09:38.987910 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-02-28 01:09:38.987914 | orchestrator | 2026-02-28 01:09:38.987918 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-28 01:09:38.987922 | orchestrator | Saturday 28 February 2026 01:07:13 +0000 (0:00:00.474) 0:00:01.145 ***** 2026-02-28 01:09:38.987927 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:09:38.987931 | orchestrator | 2026-02-28 01:09:38.987935 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting services] ************* 2026-02-28 01:09:38.987939 | orchestrator | Saturday 28 February 2026 01:07:14 +0000 (0:00:00.599) 0:00:01.744 ***** 2026-02-28 01:09:38.987944 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-02-28 01:09:38.987947 | orchestrator | 2026-02-28 01:09:38.987963 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting endpoints] ************ 2026-02-28 01:09:38.987982 | orchestrator | Saturday 28 February 2026 01:07:18 +0000 (0:00:03.805) 0:00:05.549 ***** 2026-02-28 01:09:38.987986 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-02-28 01:09:38.987990 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-02-28 01:09:38.987994 | orchestrator | 2026-02-28 01:09:38.987998 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-02-28 01:09:38.988002 | orchestrator | Saturday 28 February 2026 01:07:25 +0000 (0:00:07.619) 0:00:13.169 ***** 2026-02-28 01:09:38.988006 | 
orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-28 01:09:38.988062 | orchestrator |
2026-02-28 01:09:38.988102 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-02-28 01:09:38.988108 | orchestrator | Saturday 28 February 2026 01:07:29 +0000 (0:00:03.699) 0:00:16.868 *****
2026-02-28 01:09:38.988112 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-02-28 01:09:38.988116 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-28 01:09:38.988120 | orchestrator |
2026-02-28 01:09:38.988124 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-02-28 01:09:38.988127 | orchestrator | Saturday 28 February 2026 01:07:33 +0000 (0:00:03.829) 0:00:20.697 *****
2026-02-28 01:09:38.988131 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-28 01:09:38.988136 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-02-28 01:09:38.988140 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-02-28 01:09:38.988143 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-02-28 01:09:38.988147 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-02-28 01:09:38.988151 | orchestrator |
2026-02-28 01:09:38.988155 | orchestrator | TASK [service-ks-register : barbican | Granting/revoking user roles] ***********
2026-02-28 01:09:38.988159 | orchestrator | Saturday 28 February 2026 01:07:50 +0000 (0:00:17.268) 0:00:37.966 *****
2026-02-28 01:09:38.988163 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-02-28 01:09:38.988166 | orchestrator |
2026-02-28 01:09:38.988170 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-02-28 01:09:38.988174 | orchestrator | Saturday 28 February 2026 01:07:54 +0000 (0:00:03.743) 0:00:41.710 *****
2026-02-28 01:09:38.988217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:09:38.988238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:09:38.988253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:09:38.988259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988491 | orchestrator |
2026-02-28 01:09:38.988495 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-02-28 01:09:38.988499 | orchestrator | Saturday 28 February 2026 01:07:56 +0000 (0:00:01.924) 0:00:43.634 *****
2026-02-28 01:09:38.988503 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-02-28 01:09:38.988507 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-02-28 01:09:38.988511 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-02-28 01:09:38.988515 | orchestrator |
2026-02-28 01:09:38.988519 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-02-28 01:09:38.988523 | orchestrator | Saturday 28 February 2026 01:07:57 +0000 (0:00:00.146) 0:00:45.058 *****
2026-02-28 01:09:38.988527 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:09:38.988530 | orchestrator |
2026-02-28 01:09:38.988534 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-02-28 01:09:38.988538 | orchestrator | Saturday 28 February 2026 01:07:58 +0000 (0:00:00.146) 0:00:45.204 *****
2026-02-28 01:09:38.988542 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:09:38.988546 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:09:38.988550 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:09:38.988554 | orchestrator |
2026-02-28 01:09:38.988558 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-02-28 01:09:38.988562 | orchestrator | Saturday 28 February 2026 01:07:58 +0000 (0:00:00.991) 0:00:46.196 *****
2026-02-28 01:09:38.988566 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 01:09:38.988570 | orchestrator |
2026-02-28 01:09:38.988574 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2026-02-28 01:09:38.988578 | orchestrator | Saturday 28 February 2026 01:08:00 +0000 (0:00:01.464) 0:00:47.661 *****
2026-02-28 01:09:38.988582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:09:38.988605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:09:38.988619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:09:38.988623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988659 | orchestrator |
2026-02-28 01:09:38.988663 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2026-02-28 01:09:38.988667 | orchestrator | Saturday 28 February 2026 01:08:04 +0000 (0:00:03.956) 0:00:51.618 *****
2026-02-28 01:09:38.988671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:09:38.988693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988708 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:09:38.988712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:09:38.988717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:09:38.988733 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:09:38.988740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988773 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:09:38.988777 | orchestrator |
2026-02-28 01:09:38.988781 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2026-02-28 01:09:38.988785 | orchestrator | Saturday 28 February 2026 01:08:05 +0000 (0:00:01.195) 0:00:52.813 *****
2026-02-28 01:09:38.988792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:09:38.988796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988810 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:09:38.988818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:09:38.988823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988834 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:09:38.988838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:09:38.988842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988853 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:09:38.988857 | orchestrator |
2026-02-28 01:09:38.988861 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2026-02-28 01:09:38.988865 | orchestrator | Saturday 28 February 2026 01:08:06 +0000 (0:00:01.121) 0:00:53.935 *****
2026-02-28 01:09:38.988872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:09:38.988880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:09:38.988885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:09:38.988892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.988911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:09:38.988915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:09:38.988919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:09:38.988930 | orchestrator | 2026-02-28 01:09:38.988934 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-02-28 01:09:38.988937 | orchestrator | Saturday 28 February 2026 01:08:11 +0000 (0:00:04.712) 0:00:58.647 ***** 2026-02-28 01:09:38.988942 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:09:38.988945 | orchestrator | changed: [testbed-node-1] 
2026-02-28 01:09:38.988949 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:09:38.988953 | orchestrator | 2026-02-28 01:09:38.988957 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-02-28 01:09:38.988961 | orchestrator | Saturday 28 February 2026 01:08:15 +0000 (0:00:04.540) 0:01:03.188 ***** 2026-02-28 01:09:38.988965 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 01:09:38.988968 | orchestrator | 2026-02-28 01:09:38.988972 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-02-28 01:09:38.988976 | orchestrator | Saturday 28 February 2026 01:08:18 +0000 (0:00:02.298) 0:01:05.486 ***** 2026-02-28 01:09:38.988980 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:09:38.988984 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:09:38.988988 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:09:38.988991 | orchestrator | 2026-02-28 01:09:38.988995 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-02-28 01:09:38.988999 | orchestrator | Saturday 28 February 2026 01:08:19 +0000 (0:00:01.361) 0:01:06.848 ***** 2026-02-28 01:09:38.989005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:09:38.989013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:09:38.989017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': 
'30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:09:38.989025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:09:38.989029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:09:38.989037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:09:38.989041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:09:38.989048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:09:38.989056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:09:38.989061 | orchestrator | 2026-02-28 01:09:38.989065 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-02-28 01:09:38.989070 | orchestrator | Saturday 28 February 2026 01:08:31 +0000 (0:00:12.062) 0:01:18.910 ***** 2026-02-28 01:09:38.989074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:09:38.989082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:09:38.989087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:09:38.989091 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:09:38.989099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:09:38.989107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:09:38.989112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:09:38.989116 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:09:38.989121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:09:38.989129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:09:38.989134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:09:38.989142 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:09:38.989147 | orchestrator | 2026-02-28 01:09:38.989154 | orchestrator | TASK [service-check-containers : barbican | Check containers] ****************** 2026-02-28 01:09:38.989159 | orchestrator | Saturday 28 February 2026 01:08:33 +0000 (0:00:01.590) 0:01:20.500 ***** 2026-02-28 01:09:38.989164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:09:38.989169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:09:38.989177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:09:38.989182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:09:38.989193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:09:38.989198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:09:38.989203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:09:38.989207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:09:38.989214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:09:38.989219 | orchestrator | 2026-02-28 01:09:38.989224 | orchestrator | TASK [service-check-containers : barbican | Notify handlers to restart containers] *** 2026-02-28 01:09:38.989228 | orchestrator | Saturday 28 February 2026 01:08:37 +0000 (0:00:03.797) 0:01:24.298 ***** 2026-02-28 
01:09:38.989233 | orchestrator | changed: [testbed-node-0] => { 2026-02-28 01:09:38.989237 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:09:38.989242 | orchestrator | } 2026-02-28 01:09:38.989246 | orchestrator | changed: [testbed-node-1] => { 2026-02-28 01:09:38.989254 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:09:38.989258 | orchestrator | } 2026-02-28 01:09:38.989263 | orchestrator | changed: [testbed-node-2] => { 2026-02-28 01:09:38.989267 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:09:38.989272 | orchestrator | } 2026-02-28 01:09:38.989276 | orchestrator | 2026-02-28 01:09:38.989281 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-28 01:09:38.989285 | orchestrator | Saturday 28 February 2026 01:08:37 +0000 (0:00:00.349) 0:01:24.648 ***** 2026-02-28 01:09:38.989292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:09:38.989298 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:09:38.989302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:09:38.989307 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:09:38.989314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:09:38.989320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:09:38.989328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:09:38.989335 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:09:38.989340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:09:38.989345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:09:38.989350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:09:38.989354 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:09:38.989358 | orchestrator |
2026-02-28 01:09:38.989363 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-02-28 01:09:38.989367 | orchestrator | Saturday 28 February 2026 01:08:38 +0000 (0:00:01.455) 0:01:26.103 *****
2026-02-28 01:09:38.989372 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:09:38.989379 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:09:38.989384 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:09:38.989388 | orchestrator |
2026-02-28 01:09:38.989393 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-02-28 01:09:38.989399 | orchestrator | Saturday 28 February 2026 01:08:40 +0000 (0:00:01.598) 0:01:27.702 *****
2026-02-28 01:09:38.989404 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:09:38.989409 | orchestrator |
2026-02-28 01:09:38.989413 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-02-28 01:09:38.989417 | orchestrator | Saturday 28 February 2026 01:08:43 +0000 (0:00:02.679) 0:01:30.382 *****
2026-02-28 01:09:38.989421 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:09:38.989425 | orchestrator |
2026-02-28 01:09:38.989429 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-02-28 01:09:38.989432 | orchestrator | Saturday 28 February 2026 01:08:45 +0000 (0:00:02.802) 0:01:33.184 *****
2026-02-28 01:09:38.989436 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:09:38.989440 | orchestrator |
2026-02-28 01:09:38.989444 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-02-28 01:09:38.989448 | orchestrator | Saturday 28 February 2026 01:08:58 +0000 (0:00:12.253) 0:01:45.438 *****
2026-02-28 01:09:38.989451 | orchestrator |
2026-02-28 01:09:38.989455 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-02-28 01:09:38.989459 | orchestrator | Saturday 28 February 2026 01:08:58 +0000 (0:00:00.172) 0:01:45.611 *****
2026-02-28 01:09:38.989463 | orchestrator |
2026-02-28 01:09:38.989467 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-02-28 01:09:38.989471 | orchestrator | Saturday 28 February 2026 01:08:58 +0000 (0:00:00.173) 0:01:45.784 *****
2026-02-28 01:09:38.989474 | orchestrator |
2026-02-28 01:09:38.989478 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-02-28 01:09:38.989482 | orchestrator | Saturday 28 February 2026 01:08:58 +0000 (0:00:00.210) 0:01:45.995 *****
2026-02-28 01:09:38.989486 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:09:38.989490 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:09:38.989494 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:09:38.989498 | orchestrator |
2026-02-28 01:09:38.989501 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-02-28 01:09:38.989508 | orchestrator | Saturday 28 February 2026 01:09:10 +0000 (0:00:12.014) 0:01:58.010 *****
2026-02-28 01:09:38.989512 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:09:38.989515 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:09:38.989519 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:09:38.989523 | orchestrator |
2026-02-28 01:09:38.989527 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-02-28 01:09:38.989531 | orchestrator | Saturday 28 February 2026 01:09:24 +0000 (0:00:13.950) 0:02:11.960 *****
2026-02-28 01:09:38.989535 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:09:38.989539 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:09:38.989542 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:09:38.989546 | orchestrator |
2026-02-28 01:09:38.989550 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 01:09:38.989555 | orchestrator | testbed-node-0 : ok=25  changed=19  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-28 01:09:38.989559 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 01:09:38.989563 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 01:09:38.989567 | orchestrator |
2026-02-28 01:09:38.989571 | orchestrator |
2026-02-28 01:09:38.989575 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 01:09:38.989582 | orchestrator | Saturday 28 February 2026 01:09:35 +0000 (0:00:10.593) 0:02:22.553 *****
2026-02-28 01:09:38.989586 | orchestrator | ===============================================================================
2026-02-28 01:09:38.989590 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.27s
2026-02-28 01:09:38.989594 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 13.95s
2026-02-28 01:09:38.989597 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.25s
2026-02-28 01:09:38.989601 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 12.06s
2026-02-28 01:09:38.989605 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.01s
2026-02-28 01:09:38.989609 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.59s
2026-02-28 01:09:38.989613 | orchestrator | service-ks-register : barbican | Creating/deleting endpoints ------------ 7.62s
2026-02-28 01:09:38.989617 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.71s
2026-02-28 01:09:38.989620 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 4.54s
2026-02-28 01:09:38.989624 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.96s
2026-02-28 01:09:38.989628 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.83s
2026-02-28 01:09:38.989632 | orchestrator | service-ks-register : barbican | Creating/deleting services ------------- 3.81s
2026-02-28 01:09:38.989636 | orchestrator | service-check-containers : barbican | Check containers ------------------ 3.80s
2026-02-28 01:09:38.989639 | orchestrator | service-ks-register : barbican | Granting/revoking user roles ----------- 3.74s
2026-02-28 01:09:38.989643 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.70s
2026-02-28 01:09:38.989647 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.80s
2026-02-28 01:09:38.989651 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.68s
2026-02-28 01:09:38.989655 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.30s
2026-02-28 01:09:38.989659 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.92s
2026-02-28 01:09:38.989665 | orchestrator | barbican : include_tasks ------------------------------------------------ 1.60s
2026-02-28 01:09:38.989923 | orchestrator | 2026-02-28 01:09:38 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED
2026-02-28 01:09:38.991289 | orchestrator | 2026-02-28 01:09:38 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state
STARTED 2026-02-28 01:09:38.991622 | orchestrator | 2026-02-28 01:09:38 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:09:42.042308 | orchestrator | 2026-02-28 01:09:42 | INFO  | Task cdca8b1c-1211-47b8-adec-1ec76a27efa1 is in state STARTED 2026-02-28 01:09:42.049273 | orchestrator | 2026-02-28 01:09:42 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:09:42.049365 | orchestrator | 2026-02-28 01:09:42 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:09:42.049379 | orchestrator | 2026-02-28 01:09:42 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:09:42.049393 | orchestrator | 2026-02-28 01:09:42 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:09:45.092860 | orchestrator | 2026-02-28 01:09:45 | INFO  | Task cdca8b1c-1211-47b8-adec-1ec76a27efa1 is in state STARTED 2026-02-28 01:09:45.093782 | orchestrator | 2026-02-28 01:09:45 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:09:45.094834 | orchestrator | 2026-02-28 01:09:45 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:09:45.096279 | orchestrator | 2026-02-28 01:09:45 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:09:45.096349 | orchestrator | 2026-02-28 01:09:45 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:09:48.131252 | orchestrator | 2026-02-28 01:09:48 | INFO  | Task cdca8b1c-1211-47b8-adec-1ec76a27efa1 is in state STARTED 2026-02-28 01:09:48.133157 | orchestrator | 2026-02-28 01:09:48 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:09:48.135909 | orchestrator | 2026-02-28 01:09:48 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:09:48.138965 | orchestrator | 2026-02-28 01:09:48 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 
01:09:48.139013 | orchestrator | 2026-02-28 01:09:48 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:09:51.178320 | orchestrator | 2026-02-28 01:09:51 | INFO  | Task cdca8b1c-1211-47b8-adec-1ec76a27efa1 is in state STARTED 2026-02-28 01:09:51.179555 | orchestrator | 2026-02-28 01:09:51 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:09:51.181041 | orchestrator | 2026-02-28 01:09:51 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:09:51.181861 | orchestrator | 2026-02-28 01:09:51 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:09:51.181894 | orchestrator | 2026-02-28 01:09:51 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:09:54.217984 | orchestrator | 2026-02-28 01:09:54 | INFO  | Task cdca8b1c-1211-47b8-adec-1ec76a27efa1 is in state STARTED 2026-02-28 01:09:54.219030 | orchestrator | 2026-02-28 01:09:54 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:09:54.220375 | orchestrator | 2026-02-28 01:09:54 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:09:54.221378 | orchestrator | 2026-02-28 01:09:54 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:09:54.221436 | orchestrator | 2026-02-28 01:09:54 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:09:57.263429 | orchestrator | 2026-02-28 01:09:57 | INFO  | Task cdca8b1c-1211-47b8-adec-1ec76a27efa1 is in state STARTED 2026-02-28 01:09:57.265908 | orchestrator | 2026-02-28 01:09:57 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:09:57.266770 | orchestrator | 2026-02-28 01:09:57 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:09:57.268727 | orchestrator | 2026-02-28 01:09:57 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:09:57.268832 | orchestrator 
| 2026-02-28 01:09:57 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:00.321226 | orchestrator | 2026-02-28 01:10:00 | INFO  | Task cdca8b1c-1211-47b8-adec-1ec76a27efa1 is in state STARTED 2026-02-28 01:10:00.323005 | orchestrator | 2026-02-28 01:10:00 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:10:00.324749 | orchestrator | 2026-02-28 01:10:00 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:10:00.326832 | orchestrator | 2026-02-28 01:10:00 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:10:00.326887 | orchestrator | 2026-02-28 01:10:00 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:03.366405 | orchestrator | 2026-02-28 01:10:03 | INFO  | Task cdca8b1c-1211-47b8-adec-1ec76a27efa1 is in state STARTED 2026-02-28 01:10:03.366754 | orchestrator | 2026-02-28 01:10:03 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:10:03.368424 | orchestrator | 2026-02-28 01:10:03 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:10:03.370197 | orchestrator | 2026-02-28 01:10:03 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:10:03.370247 | orchestrator | 2026-02-28 01:10:03 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:06.422259 | orchestrator | 2026-02-28 01:10:06 | INFO  | Task cdca8b1c-1211-47b8-adec-1ec76a27efa1 is in state STARTED 2026-02-28 01:10:06.425221 | orchestrator | 2026-02-28 01:10:06 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:10:06.428289 | orchestrator | 2026-02-28 01:10:06 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:10:06.430571 | orchestrator | 2026-02-28 01:10:06 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:10:06.430629 | orchestrator | 2026-02-28 01:10:06 | INFO  | 
Wait 1 second(s) until the next check 2026-02-28 01:10:09.476957 | orchestrator | 2026-02-28 01:10:09 | INFO  | Task cdca8b1c-1211-47b8-adec-1ec76a27efa1 is in state STARTED 2026-02-28 01:10:09.477889 | orchestrator | 2026-02-28 01:10:09 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:10:09.480052 | orchestrator | 2026-02-28 01:10:09 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:10:09.482099 | orchestrator | 2026-02-28 01:10:09 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:10:09.482159 | orchestrator | 2026-02-28 01:10:09 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:12.531606 | orchestrator | 2026-02-28 01:10:12 | INFO  | Task cdca8b1c-1211-47b8-adec-1ec76a27efa1 is in state STARTED 2026-02-28 01:10:12.533312 | orchestrator | 2026-02-28 01:10:12 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:10:12.535780 | orchestrator | 2026-02-28 01:10:12 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:10:12.538153 | orchestrator | 2026-02-28 01:10:12 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:10:12.538199 | orchestrator | 2026-02-28 01:10:12 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:15.584904 | orchestrator | 2026-02-28 01:10:15 | INFO  | Task cdca8b1c-1211-47b8-adec-1ec76a27efa1 is in state STARTED 2026-02-28 01:10:15.585707 | orchestrator | 2026-02-28 01:10:15 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:10:15.586859 | orchestrator | 2026-02-28 01:10:15 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:10:15.589240 | orchestrator | 2026-02-28 01:10:15 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:10:15.589464 | orchestrator | 2026-02-28 01:10:15 | INFO  | Wait 1 second(s) until the next 
check 2026-02-28 01:10:18.632237 | orchestrator | 2026-02-28 01:10:18 | INFO  | Task cdca8b1c-1211-47b8-adec-1ec76a27efa1 is in state STARTED 2026-02-28 01:10:18.634528 | orchestrator | 2026-02-28 01:10:18 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:10:18.637938 | orchestrator | 2026-02-28 01:10:18 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:10:18.639085 | orchestrator | 2026-02-28 01:10:18 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:10:18.639123 | orchestrator | 2026-02-28 01:10:18 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:21.675216 | orchestrator | 2026-02-28 01:10:21 | INFO  | Task cdca8b1c-1211-47b8-adec-1ec76a27efa1 is in state STARTED 2026-02-28 01:10:21.680146 | orchestrator | 2026-02-28 01:10:21 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:10:21.680891 | orchestrator | 2026-02-28 01:10:21 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:10:21.683179 | orchestrator | 2026-02-28 01:10:21 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:10:21.683267 | orchestrator | 2026-02-28 01:10:21 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:24.718587 | orchestrator | 2026-02-28 01:10:24 | INFO  | Task cdca8b1c-1211-47b8-adec-1ec76a27efa1 is in state STARTED 2026-02-28 01:10:24.720154 | orchestrator | 2026-02-28 01:10:24 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:10:24.721844 | orchestrator | 2026-02-28 01:10:24 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:10:24.724401 | orchestrator | 2026-02-28 01:10:24 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:10:24.724465 | orchestrator | 2026-02-28 01:10:24 | INFO  | Wait 1 second(s) until the next check 2026-02-28 
01:10:27.766400 | orchestrator | 2026-02-28 01:10:27 | INFO  | Task cdca8b1c-1211-47b8-adec-1ec76a27efa1 is in state STARTED 2026-02-28 01:10:27.768049 | orchestrator | 2026-02-28 01:10:27 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:10:27.769642 | orchestrator | 2026-02-28 01:10:27 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:10:27.771064 | orchestrator | 2026-02-28 01:10:27 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:10:27.771104 | orchestrator | 2026-02-28 01:10:27 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:30.862197 | orchestrator | 2026-02-28 01:10:30 | INFO  | Task cdca8b1c-1211-47b8-adec-1ec76a27efa1 is in state STARTED 2026-02-28 01:10:30.862299 | orchestrator | 2026-02-28 01:10:30 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:10:30.862315 | orchestrator | 2026-02-28 01:10:30 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:10:30.862326 | orchestrator | 2026-02-28 01:10:30 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:10:30.862338 | orchestrator | 2026-02-28 01:10:30 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:33.859179 | orchestrator | 2026-02-28 01:10:33 | INFO  | Task cdca8b1c-1211-47b8-adec-1ec76a27efa1 is in state STARTED 2026-02-28 01:10:33.861201 | orchestrator | 2026-02-28 01:10:33 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:10:33.863366 | orchestrator | 2026-02-28 01:10:33 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:10:33.865540 | orchestrator | 2026-02-28 01:10:33 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:10:33.865604 | orchestrator | 2026-02-28 01:10:33 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:36.893218 | orchestrator 
| 2026-02-28 01:10:36 | INFO  | Task cdca8b1c-1211-47b8-adec-1ec76a27efa1 is in state STARTED 2026-02-28 01:10:36.893775 | orchestrator | 2026-02-28 01:10:36 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:10:36.895734 | orchestrator | 2026-02-28 01:10:36 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:10:36.896424 | orchestrator | 2026-02-28 01:10:36 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:10:36.896548 | orchestrator | 2026-02-28 01:10:36 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:39.924273 | orchestrator | 2026-02-28 01:10:39 | INFO  | Task cdca8b1c-1211-47b8-adec-1ec76a27efa1 is in state SUCCESS 2026-02-28 01:10:39.924343 | orchestrator | 2026-02-28 01:10:39 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:10:39.926751 | orchestrator | 2026-02-28 01:10:39 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:10:39.934198 | orchestrator | 2026-02-28 01:10:39 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:10:39.935177 | orchestrator | 2026-02-28 01:10:39 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:43.042321 | orchestrator | 2026-02-28 01:10:43 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:10:43.043675 | orchestrator | 2026-02-28 01:10:43 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:10:43.044863 | orchestrator | 2026-02-28 01:10:43 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:10:43.046011 | orchestrator | 2026-02-28 01:10:43 | INFO  | Task 38bfc1a7-6f10-43be-84c9-9e685e24f608 is in state STARTED 2026-02-28 01:10:43.046069 | orchestrator | 2026-02-28 01:10:43 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:46.085032 | orchestrator | 2026-02-28 01:10:46 | INFO  | 
Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:10:46.088243 | orchestrator | 2026-02-28 01:10:46 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:10:46.090918 | orchestrator | 2026-02-28 01:10:46 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:10:46.090952 | orchestrator | 2026-02-28 01:10:46 | INFO  | Task 38bfc1a7-6f10-43be-84c9-9e685e24f608 is in state STARTED 2026-02-28 01:10:46.091172 | orchestrator | 2026-02-28 01:10:46 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:49.139774 | orchestrator | 2026-02-28 01:10:49 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:10:49.139924 | orchestrator | 2026-02-28 01:10:49 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:10:49.141737 | orchestrator | 2026-02-28 01:10:49 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:10:49.142552 | orchestrator | 2026-02-28 01:10:49 | INFO  | Task 38bfc1a7-6f10-43be-84c9-9e685e24f608 is in state STARTED 2026-02-28 01:10:49.142618 | orchestrator | 2026-02-28 01:10:49 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:52.192578 | orchestrator | 2026-02-28 01:10:52 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:10:52.193514 | orchestrator | 2026-02-28 01:10:52 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:10:52.195916 | orchestrator | 2026-02-28 01:10:52 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:10:52.197336 | orchestrator | 2026-02-28 01:10:52 | INFO  | Task 38bfc1a7-6f10-43be-84c9-9e685e24f608 is in state STARTED 2026-02-28 01:10:52.197385 | orchestrator | 2026-02-28 01:10:52 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:55.296549 | orchestrator | 2026-02-28 01:10:55 | INFO  | Task 
c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:10:55.300067 | orchestrator | 2026-02-28 01:10:55 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:10:55.302910 | orchestrator | 2026-02-28 01:10:55 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:10:55.305120 | orchestrator | 2026-02-28 01:10:55 | INFO  | Task 38bfc1a7-6f10-43be-84c9-9e685e24f608 is in state STARTED 2026-02-28 01:10:55.305177 | orchestrator | 2026-02-28 01:10:55 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:58.349471 | orchestrator | 2026-02-28 01:10:58 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:10:58.351225 | orchestrator | 2026-02-28 01:10:58 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:10:58.352839 | orchestrator | 2026-02-28 01:10:58 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:10:58.354262 | orchestrator | 2026-02-28 01:10:58 | INFO  | Task 38bfc1a7-6f10-43be-84c9-9e685e24f608 is in state STARTED 2026-02-28 01:10:58.354572 | orchestrator | 2026-02-28 01:10:58 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:11:01.406574 | orchestrator | 2026-02-28 01:11:01 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED 2026-02-28 01:11:01.406873 | orchestrator | 2026-02-28 01:11:01 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED 2026-02-28 01:11:01.407976 | orchestrator | 2026-02-28 01:11:01 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:11:01.409230 | orchestrator | 2026-02-28 01:11:01 | INFO  | Task 38bfc1a7-6f10-43be-84c9-9e685e24f608 is in state STARTED 2026-02-28 01:11:01.409264 | orchestrator | 2026-02-28 01:11:01 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:11:04.453857 | orchestrator | 2026-02-28 01:11:04 | INFO  | Task 
c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED
2026-02-28 01:11:04.454475 | orchestrator | 2026-02-28 01:11:04 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED
2026-02-28 01:11:04.455912 | orchestrator | 2026-02-28 01:11:04 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED
2026-02-28 01:11:04.456755 | orchestrator | 2026-02-28 01:11:04 | INFO  | Task 38bfc1a7-6f10-43be-84c9-9e685e24f608 is in state STARTED
2026-02-28 01:11:04.456807 | orchestrator | 2026-02-28 01:11:04 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:11:07.498974 | orchestrator | 2026-02-28 01:11:07 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED
2026-02-28 01:11:07.502119 | orchestrator | 2026-02-28 01:11:07 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state STARTED
2026-02-28 01:11:07.502936 | orchestrator | 2026-02-28 01:11:07 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED
2026-02-28 01:11:07.503752 | orchestrator | 2026-02-28 01:11:07 | INFO  | Task 38bfc1a7-6f10-43be-84c9-9e685e24f608 is in state STARTED
2026-02-28 01:11:07.503883 | orchestrator | 2026-02-28 01:11:07 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:11:10.558292 | orchestrator | 2026-02-28 01:11:10 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state STARTED
2026-02-28 01:11:10.560426 | orchestrator | 2026-02-28 01:11:10 | INFO  | Task 61b9db59-9efd-4e94-8bf1-8fbb3dd0b0f6 is in state SUCCESS
2026-02-28 01:11:10.562938 | orchestrator |
2026-02-28 01:11:10.563005 | orchestrator |
2026-02-28 01:11:10.563012 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-02-28 01:11:10.563018 | orchestrator |
2026-02-28 01:11:10.563022 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-02-28 01:11:10.563028 | orchestrator | Saturday 28 February 2026 01:09:41 +0000 (0:00:00.123) 0:00:00.123 *****
2026-02-28 01:11:10.563032 | orchestrator | changed: [localhost]
2026-02-28 01:11:10.563037 | orchestrator |
2026-02-28 01:11:10.563054 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-02-28 01:11:10.563074 | orchestrator | Saturday 28 February 2026 01:09:42 +0000 (0:00:00.992) 0:00:01.116 *****
2026-02-28 01:11:10.563078 | orchestrator | changed: [localhost]
2026-02-28 01:11:10.563082 | orchestrator |
2026-02-28 01:11:10.563085 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-02-28 01:11:10.563089 | orchestrator | Saturday 28 February 2026 01:10:09 +0000 (0:00:27.044) 0:00:28.160 *****
2026-02-28 01:11:10.563093 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left).
2026-02-28 01:11:10.563098 | orchestrator | changed: [localhost]
2026-02-28 01:11:10.563101 | orchestrator |
2026-02-28 01:11:10.563105 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 01:11:10.563109 | orchestrator |
2026-02-28 01:11:10.563114 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 01:11:10.563117 | orchestrator | Saturday 28 February 2026 01:10:38 +0000 (0:00:28.695) 0:00:56.856 *****
2026-02-28 01:11:10.563121 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:11:10.563125 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:11:10.563129 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:11:10.563133 | orchestrator |
2026-02-28 01:11:10.563137 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 01:11:10.563140 | orchestrator | Saturday 28 February 2026 01:10:38 +0000 (0:00:00.313) 0:00:57.170 *****
2026-02-28 01:11:10.563144 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-02-28 01:11:10.563149 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-02-28 01:11:10.563153 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-02-28 01:11:10.563157 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-02-28 01:11:10.563160 | orchestrator |
2026-02-28 01:11:10.563164 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-02-28 01:11:10.563168 | orchestrator | skipping: no hosts matched
2026-02-28 01:11:10.563174 | orchestrator |
2026-02-28 01:11:10.563177 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 01:11:10.563181 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:11:10.563188 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:11:10.563194 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:11:10.563197 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:11:10.563201 | orchestrator |
2026-02-28 01:11:10.563205 | orchestrator |
2026-02-28 01:11:10.563209 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 01:11:10.563213 | orchestrator | Saturday 28 February 2026 01:10:39 +0000 (0:00:00.633) 0:00:57.803 *****
2026-02-28 01:11:10.563217 | orchestrator | ===============================================================================
2026-02-28 01:11:10.563220 | orchestrator | Download ironic-agent kernel ------------------------------------------- 28.70s
2026-02-28 01:11:10.563224 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 27.04s
2026-02-28 01:11:10.563228 | orchestrator | Ensure the destination directory exists --------------------------------- 0.99s
2026-02-28 01:11:10.563232 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s
2026-02-28 01:11:10.563236 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2026-02-28 01:11:10.563240 | orchestrator |
2026-02-28 01:11:10.563244 | orchestrator |
2026-02-28 01:11:10.563248 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 01:11:10.563252 | orchestrator |
2026-02-28 01:11:10.563256 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 01:11:10.563265 | orchestrator | Saturday 28 February 2026 01:07:41 +0000 (0:00:00.579) 0:00:00.579 *****
2026-02-28 01:11:10.563269 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:11:10.563272 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:11:10.563276 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:11:10.563280 | orchestrator |
2026-02-28 01:11:10.563284 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 01:11:10.563288 | orchestrator | Saturday 28 February 2026 01:07:42 +0000 (0:00:00.735) 0:00:01.314 *****
2026-02-28 01:11:10.563292 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-02-28 01:11:10.563296 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-02-28 01:11:10.563300 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-02-28 01:11:10.563303 | orchestrator |
2026-02-28 01:11:10.563307 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-02-28 01:11:10.563311 | orchestrator |
2026-02-28 01:11:10.563315 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-28 01:11:10.563319 | orchestrator | Saturday 28 February 2026 01:07:43 +0000 (0:00:00.583) 0:00:01.897 *****
2026-02-28 
01:11:10.563323 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:11:10.563327 | orchestrator | 2026-02-28 01:11:10.563330 | orchestrator | TASK [service-ks-register : designate | Creating/deleting services] ************ 2026-02-28 01:11:10.563484 | orchestrator | Saturday 28 February 2026 01:07:43 +0000 (0:00:00.657) 0:00:02.554 ***** 2026-02-28 01:11:10.563491 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-02-28 01:11:10.563498 | orchestrator | 2026-02-28 01:11:10.563504 | orchestrator | TASK [service-ks-register : designate | Creating/deleting endpoints] *********** 2026-02-28 01:11:10.563510 | orchestrator | Saturday 28 February 2026 01:07:47 +0000 (0:00:03.535) 0:00:06.090 ***** 2026-02-28 01:11:10.563522 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-02-28 01:11:10.563530 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-02-28 01:11:10.563536 | orchestrator | 2026-02-28 01:11:10.563543 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-02-28 01:11:10.563550 | orchestrator | Saturday 28 February 2026 01:07:53 +0000 (0:00:06.365) 0:00:12.455 ***** 2026-02-28 01:11:10.563558 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-28 01:11:10.563565 | orchestrator | 2026-02-28 01:11:10.563591 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-02-28 01:11:10.563596 | orchestrator | Saturday 28 February 2026 01:07:57 +0000 (0:00:03.582) 0:00:16.038 ***** 2026-02-28 01:11:10.563600 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-02-28 01:11:10.563605 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-28 01:11:10.563609 | orchestrator | 2026-02-28 
01:11:10.563613 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-02-28 01:11:10.563617 | orchestrator | Saturday 28 February 2026 01:08:02 +0000 (0:00:04.847) 0:00:20.885 ***** 2026-02-28 01:11:10.563622 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-28 01:11:10.563628 | orchestrator | 2026-02-28 01:11:10.563698 | orchestrator | TASK [service-ks-register : designate | Granting/revoking user roles] ********** 2026-02-28 01:11:10.563707 | orchestrator | Saturday 28 February 2026 01:08:05 +0000 (0:00:03.754) 0:00:24.640 ***** 2026-02-28 01:11:10.563714 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-02-28 01:11:10.563720 | orchestrator | 2026-02-28 01:11:10.563726 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-02-28 01:11:10.563731 | orchestrator | Saturday 28 February 2026 01:08:10 +0000 (0:00:04.128) 0:00:28.769 ***** 2026-02-28 01:11:10.563742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 
2026-02-28 01:11:10.563761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:11:10.563784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 
2026-02-28 01:11:10.563792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:11:10.563800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:11:10.563808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:11:10.563813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.563818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.563826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.563833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.563839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.563843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.563850 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.563855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.563859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.563877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.563882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.563886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.563894 | orchestrator | 2026-02-28 01:11:10.563898 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-02-28 01:11:10.563902 | 
orchestrator | Saturday 28 February 2026 01:08:15 +0000 (0:00:05.078) 0:00:33.847 ***** 2026-02-28 01:11:10.563906 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:10.563912 | orchestrator | 2026-02-28 01:11:10.563918 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-02-28 01:11:10.563923 | orchestrator | Saturday 28 February 2026 01:08:15 +0000 (0:00:00.414) 0:00:34.262 ***** 2026-02-28 01:11:10.563932 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:10.563941 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:10.563969 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:10.563983 | orchestrator | 2026-02-28 01:11:10.563989 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-28 01:11:10.563996 | orchestrator | Saturday 28 February 2026 01:08:16 +0000 (0:00:00.898) 0:00:35.161 ***** 2026-02-28 01:11:10.564002 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:11:10.564009 | orchestrator | 2026-02-28 01:11:10.564016 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-02-28 01:11:10.564032 | orchestrator | Saturday 28 February 2026 01:08:17 +0000 (0:00:01.484) 0:00:36.646 ***** 2026-02-28 01:11:10.564040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': 
'30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:11:10.564054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:11:10.564070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': 
'30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:11:10.564084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:11:10.564090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:11:10.564096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:11:10.564102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.564118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.564126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.564138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.564144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.564152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.564156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.564160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.564171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.564180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.564184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.564188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.564192 | orchestrator | 2026-02-28 01:11:10.564196 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-02-28 01:11:10.564200 | orchestrator | Saturday 28 February 2026 01:08:25 +0000 (0:00:07.777) 0:00:44.423 ***** 2026-02-28 01:11:10.564204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:11:10.564578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:11:10.564604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.564608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.564613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.564618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:11:10.564622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.564626 | orchestrator | skipping: [testbed-node-0] 2026-02-28 
01:11:10.564664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:11:10.564677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:11:10.564688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:11:10.564697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.564702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.564709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.564721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.564737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.564744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.564750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.564756 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:10.564778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.564784 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:10.564789 | orchestrator | 2026-02-28 01:11:10.564795 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-02-28 01:11:10.564801 | orchestrator | Saturday 28 February 2026 01:08:29 +0000 (0:00:03.748) 
0:00:48.172 ***** 2026-02-28 01:11:10.564807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:11:10.564824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:11:10.564838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.564846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:11:10.564852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2026-02-28 01:11:10.564858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:11:10.564868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.564882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:11:10.564889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:11:10.564924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.564931 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:10.564938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.564945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.564954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.564964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.564973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.564980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.564986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.564992 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:10.564999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.565003 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:10.565007 | orchestrator | 2026-02-28 01:11:10.565011 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-02-28 01:11:10.565019 | orchestrator | Saturday 28 February 2026 01:08:32 +0000 (0:00:03.446) 0:00:51.619 ***** 2026-02-28 01:11:10.565023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:11:10.565035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:11:10.565039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:11:10.565043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-28 01:11:10.565048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-28 01:11:10.565058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-28 01:11:10.565065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565150 | orchestrator |
2026-02-28 01:11:10.565160 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-02-28 01:11:10.565168 | orchestrator | Saturday 28 February 2026 01:08:40 +0000 (0:00:07.101) 0:00:58.720 *****
2026-02-28 01:11:10.565174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:11:10.565190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:11:10.565196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:11:10.565203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-28 01:11:10.565215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-28 01:11:10.565221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-28 01:11:10.565236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565329 | orchestrator |
2026-02-28 01:11:10.565334 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-02-28 01:11:10.565338 | orchestrator | Saturday 28 February 2026 01:09:04 +0000 (0:00:24.505) 0:01:23.225 *****
2026-02-28 01:11:10.565343 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-02-28 01:11:10.565347 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-02-28 01:11:10.565352 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-02-28 01:11:10.565356 | orchestrator |
2026-02-28 01:11:10.565360 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-02-28 01:11:10.565366 | orchestrator | Saturday 28 February 2026 01:09:14 +0000 (0:00:10.390) 0:01:33.615 *****
2026-02-28 01:11:10.565373 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-02-28 01:11:10.565379 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-02-28 01:11:10.565384 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-02-28 01:11:10.565390 | orchestrator |
2026-02-28 01:11:10.565395 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-02-28 01:11:10.565401 | orchestrator | Saturday 28 February 2026 01:09:19 +0000 (0:00:04.233) 0:01:37.848 *****
2026-02-28 01:11:10.565415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:11:10.565422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:11:10.565433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:11:10.565439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-28 01:11:10.565445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-28 01:11:10.565481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-28 01:11:10.565513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:11:10.565555 | orchestrator |
2026-02-28 01:11:10.565561 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-02-28 01:11:10.565568 | orchestrator | Saturday 28 February 2026 01:09:23 +0000 (0:00:03.927) 0:01:41.776 *****
2026-02-28 01:11:10.565580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:11:10.565593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:11:10.565599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:11:10.565820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-28 01:11:10.565833 | orchestrator | skipping: [testbed-node-0] =>
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.565841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.565852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.565856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:11:10.565860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.565869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.565874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.565880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:11:10.565888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.565892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 
'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.565896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.565900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.565909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.565913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.565917 | orchestrator | 2026-02-28 01:11:10.565921 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-28 01:11:10.565925 | orchestrator | Saturday 28 February 2026 01:09:26 +0000 (0:00:03.015) 0:01:44.792 ***** 2026-02-28 01:11:10.565935 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:10.565939 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:10.565943 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:10.565947 | orchestrator | 2026-02-28 01:11:10.565951 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-02-28 01:11:10.565957 | orchestrator | Saturday 28 February 2026 01:09:27 +0000 (0:00:01.454) 0:01:46.247 ***** 2026-02-28 01:11:10.565962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:11:10.565966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:11:10.565970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.565979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.565983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.565994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.565998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:11:10.566002 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:10.566006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:11:10.566010 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.566053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.566059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.566070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.566074 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:10.566078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:11:10.566083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:11:10.566087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.566094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.566098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.566108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.566113 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:10.566117 | orchestrator | 2026-02-28 01:11:10.566120 | orchestrator | TASK [service-check-containers : designate | Check containers] ***************** 2026-02-28 01:11:10.566124 | orchestrator | Saturday 28 February 2026 01:09:28 +0000 (0:00:01.143) 0:01:47.390 ***** 2026-02-28 01:11:10.566129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:11:10.566133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:11:10.566140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:11:10.566148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:11:10.566154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:11:10.566159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:11:10.566163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.566167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.566173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.566181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.566185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.566191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.566196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.566200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.566204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.566210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.566218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.566225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:10.566230 | orchestrator | 2026-02-28 01:11:10.566234 | orchestrator | TASK [service-check-containers : designate | Notify handlers to restart containers] *** 2026-02-28 01:11:10.566238 | orchestrator | Saturday 28 February 2026 01:09:33 +0000 (0:00:05.252) 0:01:52.643 ***** 2026-02-28 01:11:10.566242 | orchestrator | changed: [testbed-node-0] => { 2026-02-28 01:11:10.566246 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:11:10.566249 | orchestrator | } 2026-02-28 01:11:10.566254 | orchestrator | changed: [testbed-node-1] => { 2026-02-28 01:11:10.566258 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:11:10.566262 | orchestrator | } 2026-02-28 01:11:10.566265 | orchestrator | changed: [testbed-node-2] => { 2026-02-28 01:11:10.566269 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:11:10.566273 | orchestrator | } 2026-02-28 01:11:10.566278 | orchestrator | 2026-02-28 01:11:10.566281 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-28 01:11:10.566285 | orchestrator | Saturday 28 February 2026 01:09:34 +0000 (0:00:00.542) 0:01:53.186 ***** 2026-02-28 01:11:10.566289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:11:10.566294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:11:10.566303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.566308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.566316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.566320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.566325 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:10.566329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:11:10.566336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:11:10.566344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 
5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.566348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.566355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.566360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.566364 | orchestrator | 
skipping: [testbed-node-1] 2026-02-28 01:11:10.566368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:11:10.566374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:11:10.566382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.566386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.566393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.566397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:11:10.566402 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:10.566405 | orchestrator | 2026-02-28 01:11:10.566409 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-28 01:11:10.566414 | orchestrator | Saturday 28 February 2026 01:09:37 +0000 (0:00:03.085) 0:01:56.271 ***** 2026-02-28 01:11:10.566417 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:10.566421 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:10.566425 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:10.566429 | orchestrator | 2026-02-28 01:11:10.566433 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-02-28 01:11:10.566444 | orchestrator | Saturday 28 February 2026 01:09:38 +0000 (0:00:00.408) 0:01:56.679 ***** 2026-02-28 01:11:10.566450 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-02-28 01:11:10.566456 | orchestrator | 2026-02-28 01:11:10.566461 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-02-28 01:11:10.566467 | orchestrator | Saturday 28 February 2026 01:09:40 +0000 (0:00:02.399) 0:01:59.079 ***** 2026-02-28 01:11:10.566472 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-28 01:11:10.566478 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-02-28 01:11:10.566485 | orchestrator | 2026-02-28 01:11:10.566490 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-02-28 01:11:10.566497 | 
orchestrator | Saturday 28 February 2026 01:09:42 +0000 (0:00:02.552) 0:02:01.632 ***** 2026-02-28 01:11:10.566503 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:10.566509 | orchestrator | 2026-02-28 01:11:10.566516 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-28 01:11:10.566522 | orchestrator | Saturday 28 February 2026 01:10:00 +0000 (0:00:17.742) 0:02:19.374 ***** 2026-02-28 01:11:10.566529 | orchestrator | 2026-02-28 01:11:10.566535 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-28 01:11:10.566542 | orchestrator | Saturday 28 February 2026 01:10:00 +0000 (0:00:00.080) 0:02:19.455 ***** 2026-02-28 01:11:10.566549 | orchestrator | 2026-02-28 01:11:10.566554 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-28 01:11:10.566558 | orchestrator | Saturday 28 February 2026 01:10:00 +0000 (0:00:00.071) 0:02:19.526 ***** 2026-02-28 01:11:10.566563 | orchestrator | 2026-02-28 01:11:10.566567 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-02-28 01:11:10.566571 | orchestrator | Saturday 28 February 2026 01:10:00 +0000 (0:00:00.080) 0:02:19.607 ***** 2026-02-28 01:11:10.566576 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:10.566580 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:11:10.566585 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:11:10.566589 | orchestrator | 2026-02-28 01:11:10.566597 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-02-28 01:11:10.566601 | orchestrator | Saturday 28 February 2026 01:10:14 +0000 (0:00:14.040) 0:02:33.648 ***** 2026-02-28 01:11:10.566607 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:10.566612 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:11:10.566619 | orchestrator | changed: 
[testbed-node-1] 2026-02-28 01:11:10.566626 | orchestrator | 2026-02-28 01:11:10.566631 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-02-28 01:11:10.566652 | orchestrator | Saturday 28 February 2026 01:10:21 +0000 (0:00:06.342) 0:02:39.990 ***** 2026-02-28 01:11:10.566657 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:11:10.566661 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:11:10.566666 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:10.566670 | orchestrator | 2026-02-28 01:11:10.566674 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-02-28 01:11:10.566679 | orchestrator | Saturday 28 February 2026 01:10:30 +0000 (0:00:09.040) 0:02:49.031 ***** 2026-02-28 01:11:10.566683 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:11:10.566688 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:11:10.566692 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:10.566696 | orchestrator | 2026-02-28 01:11:10.566701 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-02-28 01:11:10.566705 | orchestrator | Saturday 28 February 2026 01:10:41 +0000 (0:00:11.266) 0:03:00.297 ***** 2026-02-28 01:11:10.566709 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:10.566714 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:11:10.566718 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:11:10.566722 | orchestrator | 2026-02-28 01:11:10.566727 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-02-28 01:11:10.566736 | orchestrator | Saturday 28 February 2026 01:10:54 +0000 (0:00:12.806) 0:03:13.103 ***** 2026-02-28 01:11:10.566740 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:10.566745 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:11:10.566749 | orchestrator | changed: 
[testbed-node-1] 2026-02-28 01:11:10.566753 | orchestrator | 2026-02-28 01:11:10.566758 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-02-28 01:11:10.566766 | orchestrator | Saturday 28 February 2026 01:11:01 +0000 (0:00:06.749) 0:03:19.853 ***** 2026-02-28 01:11:10.566771 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:10.566775 | orchestrator | 2026-02-28 01:11:10.566780 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:11:10.566784 | orchestrator | testbed-node-0 : ok=30  changed=24  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-28 01:11:10.566790 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-28 01:11:10.566794 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-28 01:11:10.566799 | orchestrator | 2026-02-28 01:11:10.566803 | orchestrator | 2026-02-28 01:11:10.566807 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:11:10.566812 | orchestrator | Saturday 28 February 2026 01:11:08 +0000 (0:00:07.777) 0:03:27.630 ***** 2026-02-28 01:11:10.566816 | orchestrator | =============================================================================== 2026-02-28 01:11:10.566821 | orchestrator | designate : Copying over designate.conf -------------------------------- 24.51s 2026-02-28 01:11:10.566825 | orchestrator | designate : Running Designate bootstrap container ---------------------- 17.74s 2026-02-28 01:11:10.566829 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 14.04s 2026-02-28 01:11:10.566833 | orchestrator | designate : Restart designate-mdns container --------------------------- 12.81s 2026-02-28 01:11:10.566838 | orchestrator | designate : Restart designate-producer container 
----------------------- 11.27s 2026-02-28 01:11:10.566842 | orchestrator | designate : Copying over pools.yaml ------------------------------------ 10.39s 2026-02-28 01:11:10.566846 | orchestrator | designate : Restart designate-central container ------------------------- 9.04s 2026-02-28 01:11:10.566850 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.78s 2026-02-28 01:11:10.566854 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.78s 2026-02-28 01:11:10.566858 | orchestrator | designate : Copying over config.json files for services ----------------- 7.10s 2026-02-28 01:11:10.566861 | orchestrator | designate : Restart designate-worker container -------------------------- 6.75s 2026-02-28 01:11:10.566865 | orchestrator | service-ks-register : designate | Creating/deleting endpoints ----------- 6.37s 2026-02-28 01:11:10.566869 | orchestrator | designate : Restart designate-api container ----------------------------- 6.34s 2026-02-28 01:11:10.566873 | orchestrator | service-check-containers : designate | Check containers ----------------- 5.25s 2026-02-28 01:11:10.566877 | orchestrator | designate : Ensuring config directories exist --------------------------- 5.08s 2026-02-28 01:11:10.566881 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.85s 2026-02-28 01:11:10.566884 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.23s 2026-02-28 01:11:10.566888 | orchestrator | service-ks-register : designate | Granting/revoking user roles ---------- 4.13s 2026-02-28 01:11:10.566892 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.93s 2026-02-28 01:11:10.566896 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.75s 2026-02-28 01:11:10.566900 | orchestrator | 2026-02-28 01:11:10 | INFO  | Task 
4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:11:10.568536 | orchestrator | 2026-02-28 01:11:10 | INFO  | Task 38bfc1a7-6f10-43be-84c9-9e685e24f608 is in state STARTED 2026-02-28 01:11:10.568607 | orchestrator | 2026-02-28 01:11:10 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:12:11.496553 | orchestrator | 2026-02-28 01:12:11 | INFO  | Task e5fe25b6-f246-46db-bfae-26ef50103f65 is in state STARTED 2026-02-28 01:12:11.498982 | orchestrator | 2026-02-28 01:12:11 | INFO  | Task c15b47cf-1c0f-42de-b373-e0eb82491ed1 is in state SUCCESS 2026-02-28 01:12:11.500156 | orchestrator | 2026-02-28 01:12:11.500178 | orchestrator | 2026-02-28 01:12:11.500184 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:12:11.500190 | orchestrator | 2026-02-28 01:12:11.500194 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:12:11.500199 | orchestrator | Saturday 28 February 2026 01:06:38 +0000 (0:00:00.308) 0:00:00.308 ***** 2026-02-28 01:12:11.500203 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:12:11.500209 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:12:11.500213 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:12:11.500217 | orchestrator | ok: [testbed-node-3] 
2026-02-28 01:12:11.500221 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:12:11.500225 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:12:11.500229 | orchestrator | 2026-02-28 01:12:11.500234 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:12:11.500238 | orchestrator | Saturday 28 February 2026 01:06:39 +0000 (0:00:00.782) 0:00:01.090 ***** 2026-02-28 01:12:11.500242 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-02-28 01:12:11.500246 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-02-28 01:12:11.500250 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-02-28 01:12:11.500254 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-02-28 01:12:11.500258 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-02-28 01:12:11.500262 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-02-28 01:12:11.500266 | orchestrator | 2026-02-28 01:12:11.500270 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-02-28 01:12:11.500274 | orchestrator | 2026-02-28 01:12:11.500278 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-28 01:12:11.500282 | orchestrator | Saturday 28 February 2026 01:06:40 +0000 (0:00:00.943) 0:00:02.033 ***** 2026-02-28 01:12:11.500288 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 01:12:11.500312 | orchestrator | 2026-02-28 01:12:11.500316 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-02-28 01:12:11.500320 | orchestrator | Saturday 28 February 2026 01:06:42 +0000 (0:00:01.551) 0:00:03.585 ***** 2026-02-28 01:12:11.500324 | orchestrator | ok: [testbed-node-3] 2026-02-28 
01:12:11.500328 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:12:11.500332 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:12:11.500336 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:12:11.500340 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:12:11.500344 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:12:11.500348 | orchestrator | 2026-02-28 01:12:11.500352 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-02-28 01:12:11.500356 | orchestrator | Saturday 28 February 2026 01:06:43 +0000 (0:00:01.690) 0:00:05.275 ***** 2026-02-28 01:12:11.500360 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:12:11.500364 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:12:11.500368 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:12:11.500372 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:12:11.500376 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:12:11.500380 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:12:11.500383 | orchestrator | 2026-02-28 01:12:11.500387 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-02-28 01:12:11.500391 | orchestrator | Saturday 28 February 2026 01:06:45 +0000 (0:00:01.658) 0:00:06.934 ***** 2026-02-28 01:12:11.500396 | orchestrator | ok: [testbed-node-0] => { 2026-02-28 01:12:11.500400 | orchestrator |  "changed": false, 2026-02-28 01:12:11.500404 | orchestrator |  "msg": "All assertions passed" 2026-02-28 01:12:11.500409 | orchestrator | } 2026-02-28 01:12:11.500424 | orchestrator | ok: [testbed-node-1] => { 2026-02-28 01:12:11.500428 | orchestrator |  "changed": false, 2026-02-28 01:12:11.500432 | orchestrator |  "msg": "All assertions passed" 2026-02-28 01:12:11.500436 | orchestrator | } 2026-02-28 01:12:11.500440 | orchestrator | ok: [testbed-node-2] => { 2026-02-28 01:12:11.500444 | orchestrator |  "changed": false, 2026-02-28 01:12:11.500448 | orchestrator |  "msg": "All assertions passed" 
2026-02-28 01:12:11.500452 | orchestrator | } 2026-02-28 01:12:11.500456 | orchestrator | ok: [testbed-node-3] => { 2026-02-28 01:12:11.500460 | orchestrator |  "changed": false, 2026-02-28 01:12:11.500464 | orchestrator |  "msg": "All assertions passed" 2026-02-28 01:12:11.500468 | orchestrator | } 2026-02-28 01:12:11.500472 | orchestrator | ok: [testbed-node-4] => { 2026-02-28 01:12:11.500476 | orchestrator |  "changed": false, 2026-02-28 01:12:11.500480 | orchestrator |  "msg": "All assertions passed" 2026-02-28 01:12:11.500484 | orchestrator | } 2026-02-28 01:12:11.500488 | orchestrator | ok: [testbed-node-5] => { 2026-02-28 01:12:11.500491 | orchestrator |  "changed": false, 2026-02-28 01:12:11.500495 | orchestrator |  "msg": "All assertions passed" 2026-02-28 01:12:11.500499 | orchestrator | } 2026-02-28 01:12:11.500503 | orchestrator | 2026-02-28 01:12:11.500507 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-02-28 01:12:11.500511 | orchestrator | Saturday 28 February 2026 01:06:46 +0000 (0:00:01.059) 0:00:07.993 ***** 2026-02-28 01:12:11.500515 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.500519 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.500523 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.500527 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.500530 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.500534 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.500538 | orchestrator | 2026-02-28 01:12:11.500542 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting services] ************** 2026-02-28 01:12:11.500546 | orchestrator | Saturday 28 February 2026 01:06:47 +0000 (0:00:00.820) 0:00:08.814 ***** 2026-02-28 01:12:11.500550 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-02-28 01:12:11.500554 | orchestrator | 2026-02-28 01:12:11.500558 | orchestrator | TASK 
[service-ks-register : neutron | Creating/deleting endpoints] ************* 2026-02-28 01:12:11.500566 | orchestrator | Saturday 28 February 2026 01:06:51 +0000 (0:00:03.795) 0:00:12.610 ***** 2026-02-28 01:12:11.500570 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-02-28 01:12:11.500575 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-02-28 01:12:11.500579 | orchestrator | 2026-02-28 01:12:11.500589 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-02-28 01:12:11.500615 | orchestrator | Saturday 28 February 2026 01:06:58 +0000 (0:00:07.784) 0:00:20.394 ***** 2026-02-28 01:12:11.500620 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-28 01:12:11.500624 | orchestrator | 2026-02-28 01:12:11.500627 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-02-28 01:12:11.500631 | orchestrator | Saturday 28 February 2026 01:07:02 +0000 (0:00:03.808) 0:00:24.203 ***** 2026-02-28 01:12:11.500635 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-02-28 01:12:11.500640 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-28 01:12:11.500644 | orchestrator | 2026-02-28 01:12:11.500648 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-02-28 01:12:11.500652 | orchestrator | Saturday 28 February 2026 01:07:07 +0000 (0:00:04.507) 0:00:28.710 ***** 2026-02-28 01:12:11.500656 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-28 01:12:11.500660 | orchestrator | 2026-02-28 01:12:11.500665 | orchestrator | TASK [service-ks-register : neutron | Granting/revoking user roles] ************ 2026-02-28 01:12:11.500671 | orchestrator | Saturday 28 February 2026 01:07:11 +0000 (0:00:03.794) 0:00:32.505 ***** 2026-02-28 
01:12:11.500677 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-02-28 01:12:11.500683 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-02-28 01:12:11.500690 | orchestrator | 2026-02-28 01:12:11.500697 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-28 01:12:11.500703 | orchestrator | Saturday 28 February 2026 01:07:19 +0000 (0:00:08.129) 0:00:40.635 ***** 2026-02-28 01:12:11.500708 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.500715 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.500720 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.500876 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.500883 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.500887 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.500891 | orchestrator | 2026-02-28 01:12:11.500895 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-02-28 01:12:11.500899 | orchestrator | Saturday 28 February 2026 01:07:19 +0000 (0:00:00.826) 0:00:41.462 ***** 2026-02-28 01:12:11.500903 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.500907 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.500911 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.500915 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.500919 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.500923 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.500927 | orchestrator | 2026-02-28 01:12:11.500930 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-02-28 01:12:11.500935 | orchestrator | Saturday 28 February 2026 01:07:22 +0000 (0:00:02.045) 0:00:43.508 ***** 2026-02-28 01:12:11.500938 | orchestrator | ok: [testbed-node-1] 2026-02-28 
01:12:11.500943 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:12:11.500947 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:12:11.500951 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:12:11.500955 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:12:11.500958 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:12:11.500962 | orchestrator | 2026-02-28 01:12:11.500966 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-28 01:12:11.500970 | orchestrator | Saturday 28 February 2026 01:07:23 +0000 (0:00:01.183) 0:00:44.691 ***** 2026-02-28 01:12:11.500981 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.500984 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.500988 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.500997 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.501001 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.501005 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.501010 | orchestrator | 2026-02-28 01:12:11.501014 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-02-28 01:12:11.501018 | orchestrator | Saturday 28 February 2026 01:07:25 +0000 (0:00:01.917) 0:00:46.608 ***** 2026-02-28 01:12:11.501025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:12:11.501042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:12:11.501048 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:12:11.501054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:12:11.501065 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 
01:12:11.501071 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:12:11.501075 | orchestrator | 2026-02-28 01:12:11.501080 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-02-28 01:12:11.501084 | orchestrator | Saturday 28 February 2026 01:07:29 +0000 (0:00:04.606) 0:00:51.214 ***** 2026-02-28 01:12:11.501089 | orchestrator | [WARNING]: Skipped 2026-02-28 01:12:11.501094 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-02-28 01:12:11.501102 | orchestrator | due to this access issue: 2026-02-28 01:12:11.501106 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-02-28 01:12:11.501111 | orchestrator | a directory 2026-02-28 01:12:11.501115 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 01:12:11.501120 | orchestrator | 2026-02-28 01:12:11.501124 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-28 01:12:11.501128 | orchestrator | Saturday 28 February 2026 01:07:30 +0000 (0:00:00.928) 0:00:52.142 ***** 2026-02-28 01:12:11.501133 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, 
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 01:12:11.501139 | orchestrator | 2026-02-28 01:12:11.501143 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-02-28 01:12:11.501147 | orchestrator | Saturday 28 February 2026 01:07:31 +0000 (0:00:01.129) 0:00:53.271 ***** 2026-02-28 01:12:11.501152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:12:11.501165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:12:11.501170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:12:11.501180 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:12:11.501185 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:12:11.501189 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:12:11.501201 | orchestrator | 2026-02-28 01:12:11.501206 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] 
*** 2026-02-28 01:12:11.501210 | orchestrator | Saturday 28 February 2026 01:07:34 +0000 (0:00:02.701) 0:00:55.973 ***** 2026-02-28 01:12:11.501218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.501223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.501228 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.501232 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.501240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.501245 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.501265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option 
httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.501277 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.501285 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.501290 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.501294 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.501299 | orchestrator | 
skipping: [testbed-node-3] 2026-02-28 01:12:11.501303 | orchestrator | 2026-02-28 01:12:11.501308 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-28 01:12:11.501312 | orchestrator | Saturday 28 February 2026 01:07:37 +0000 (0:00:03.237) 0:00:59.210 ***** 2026-02-28 01:12:11.501320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.501325 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.501330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.501338 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.501342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.501362 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.501374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.501379 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.501387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.501391 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.501396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.501405 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.501409 | orchestrator | 2026-02-28 01:12:11.501414 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-28 01:12:11.501418 | orchestrator | Saturday 28 February 2026 01:07:40 +0000 (0:00:03.047) 0:01:02.257 ***** 2026-02-28 01:12:11.501423 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.501427 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.501431 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.501436 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.501440 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.501444 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.501449 | orchestrator | 2026-02-28 01:12:11.501453 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-28 01:12:11.501457 | orchestrator | Saturday 28 February 2026 01:07:43 +0000 (0:00:02.906) 0:01:05.164 ***** 2026-02-28 01:12:11.501462 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.501466 | orchestrator | 2026-02-28 01:12:11.501470 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-02-28 01:12:11.501475 | orchestrator | Saturday 28 February 2026 01:07:43 +0000 (0:00:00.151) 0:01:05.316 ***** 2026-02-28 01:12:11.501479 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.501483 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.501488 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.501492 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.501496 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.501501 | orchestrator | skipping: 
[testbed-node-5] 2026-02-28 01:12:11.501505 | orchestrator | 2026-02-28 01:12:11.501523 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-28 01:12:11.501528 | orchestrator | Saturday 28 February 2026 01:07:44 +0000 (0:00:00.801) 0:01:06.117 ***** 2026-02-28 01:12:11.501537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.501542 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.501546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.501936 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.501959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.501965 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.501970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.501975 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.501986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.501991 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.501995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.502008 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.502042 | orchestrator | 2026-02-28 01:12:11.502049 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-02-28 01:12:11.502054 | orchestrator | Saturday 28 February 2026 01:07:46 +0000 (0:00:02.164) 0:01:08.282 ***** 2026-02-28 01:12:11.502066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:12:11.502071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:12:11.502080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:12:11.502085 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:12:11.502094 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:12:11.502116 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 
01:12:11.502121 | orchestrator | 2026-02-28 01:12:11.502126 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-02-28 01:12:11.502130 | orchestrator | Saturday 28 February 2026 01:07:50 +0000 (0:00:03.204) 0:01:11.486 ***** 2026-02-28 01:12:11.502135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:12:11.502143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:12:11.502147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:12:11.502161 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:12:11.502166 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:12:11.502170 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:12:11.502175 | orchestrator | 2026-02-28 01:12:11.502179 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-02-28 01:12:11.502184 | orchestrator | Saturday 28 February 2026 01:07:54 +0000 
(0:00:04.912) 0:01:16.399 ***** 2026-02-28 01:12:11.502191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.502200 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.502208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.502212 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.502217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.502222 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.502226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.502231 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.502238 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.502251 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.502258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.502269 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.502278 | orchestrator | 2026-02-28 
01:12:11.502284 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-02-28 01:12:11.502291 | orchestrator | Saturday 28 February 2026 01:07:57 +0000 (0:00:02.947) 0:01:19.347 ***** 2026-02-28 01:12:11.502298 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.502305 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.502311 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.502318 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:12:11.502325 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:12:11.502331 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:12:11.502338 | orchestrator | 2026-02-28 01:12:11.502345 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-02-28 01:12:11.502356 | orchestrator | Saturday 28 February 2026 01:08:01 +0000 (0:00:03.723) 0:01:23.070 ***** 2026-02-28 01:12:11.502363 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.502370 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.502377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.502384 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.502391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.502413 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.502420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:12:11.502434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:12:11.502442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:12:11.502451 | orchestrator | 2026-02-28 01:12:11.502455 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-02-28 01:12:11.502459 | orchestrator | Saturday 28 February 2026 01:08:06 +0000 (0:00:04.889) 0:01:27.960 ***** 2026-02-28 01:12:11.502464 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.502468 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.502472 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.502481 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.502485 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.502490 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.502494 | orchestrator | 2026-02-28 01:12:11.502498 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-02-28 01:12:11.502503 | orchestrator | Saturday 28 February 2026 01:08:09 +0000 (0:00:03.233) 0:01:31.194 ***** 2026-02-28 01:12:11.502507 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.502512 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.502516 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.502520 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.502525 | orchestrator | skipping: [testbed-node-4] 
2026-02-28 01:12:11.502529 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.502533 | orchestrator | 2026-02-28 01:12:11.502538 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-02-28 01:12:11.502544 | orchestrator | Saturday 28 February 2026 01:08:14 +0000 (0:00:04.640) 0:01:35.834 ***** 2026-02-28 01:12:11.502549 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.502554 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.502559 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.502564 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.502572 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.502577 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.502582 | orchestrator | 2026-02-28 01:12:11.502587 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-02-28 01:12:11.502592 | orchestrator | Saturday 28 February 2026 01:08:18 +0000 (0:00:03.919) 0:01:39.754 ***** 2026-02-28 01:12:11.502621 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.502626 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.502631 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.502636 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.502641 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.502645 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.502650 | orchestrator | 2026-02-28 01:12:11.502655 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-02-28 01:12:11.502660 | orchestrator | Saturday 28 February 2026 01:08:22 +0000 (0:00:04.647) 0:01:44.401 ***** 2026-02-28 01:12:11.502666 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.502673 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.502683 | orchestrator | skipping: [testbed-node-3] 
2026-02-28 01:12:11.502692 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.502701 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.502708 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.502715 | orchestrator | 2026-02-28 01:12:11.502722 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-02-28 01:12:11.502730 | orchestrator | Saturday 28 February 2026 01:08:26 +0000 (0:00:03.947) 0:01:48.349 ***** 2026-02-28 01:12:11.502736 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-28 01:12:11.502744 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.502751 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-28 01:12:11.502759 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.502766 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-28 01:12:11.502774 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.502781 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-28 01:12:11.502788 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.502796 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-28 01:12:11.502804 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.502817 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-28 01:12:11.502833 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.502839 | orchestrator | 2026-02-28 01:12:11.502844 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-02-28 01:12:11.502849 | orchestrator | Saturday 28 February 2026 01:08:30 +0000 (0:00:04.081) 0:01:52.430 ***** 
2026-02-28 01:12:11.502855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.502861 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.502867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.502872 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.502878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.502883 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.502892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.502902 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.502907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.502912 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.502918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.502923 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.502928 | orchestrator | 2026-02-28 01:12:11.502959 | orchestrator | TASK 
[neutron : Copying over fwaas_driver.ini] ********************************* 2026-02-28 01:12:11.502964 | orchestrator | Saturday 28 February 2026 01:08:34 +0000 (0:00:03.861) 0:01:56.292 ***** 2026-02-28 01:12:11.502971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.502976 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.502981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.502991 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.502999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.503004 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.503009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.503013 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.503021 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.503025 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.503030 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.503035 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.503043 | orchestrator | 2026-02-28 01:12:11.503047 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-02-28 01:12:11.503052 | orchestrator | Saturday 28 February 2026 01:08:37 +0000 (0:00:02.922) 0:01:59.215 ***** 2026-02-28 01:12:11.503056 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.503060 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.503065 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.503069 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.503074 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.503078 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.503082 | orchestrator | 2026-02-28 01:12:11.503087 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-02-28 01:12:11.503091 | orchestrator | Saturday 28 February 2026 01:08:41 +0000 (0:00:04.114) 0:02:03.329 ***** 2026-02-28 01:12:11.503096 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.503100 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.503104 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.503109 | orchestrator | changed: [testbed-node-5] 2026-02-28 01:12:11.503113 | orchestrator | changed: [testbed-node-4] 2026-02-28 01:12:11.503117 | orchestrator | changed: [testbed-node-3] 2026-02-28 01:12:11.503123 | orchestrator | 2026-02-28 01:12:11.503136 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-02-28 01:12:11.503146 | orchestrator | Saturday 28 February 2026 01:08:48 +0000 (0:00:06.805) 0:02:10.134 ***** 2026-02-28 01:12:11.503153 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.503160 | orchestrator | skipping: [testbed-node-2] 2026-02-28 
01:12:11.503167 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.503173 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.503179 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.503185 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.503192 | orchestrator | 2026-02-28 01:12:11.503199 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-02-28 01:12:11.503205 | orchestrator | Saturday 28 February 2026 01:08:52 +0000 (0:00:04.259) 0:02:14.394 ***** 2026-02-28 01:12:11.503211 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.503218 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.503225 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.503232 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.503239 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.503246 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.503253 | orchestrator | 2026-02-28 01:12:11.503260 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-02-28 01:12:11.503268 | orchestrator | Saturday 28 February 2026 01:08:56 +0000 (0:00:03.498) 0:02:17.893 ***** 2026-02-28 01:12:11.503273 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.503277 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.503281 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.503285 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.503290 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.503294 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.503300 | orchestrator | 2026-02-28 01:12:11.503308 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-02-28 01:12:11.503315 | orchestrator | Saturday 28 February 2026 01:09:00 +0000 (0:00:03.681) 0:02:21.574 ***** 2026-02-28 
01:12:11.503321 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.503353 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.503360 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.503367 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.503375 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.503382 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.503389 | orchestrator | 2026-02-28 01:12:11.503397 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-02-28 01:12:11.503411 | orchestrator | Saturday 28 February 2026 01:09:03 +0000 (0:00:03.574) 0:02:25.149 ***** 2026-02-28 01:12:11.503419 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.503425 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.503429 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.503433 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.503438 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.503442 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.503446 | orchestrator | 2026-02-28 01:12:11.503451 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-02-28 01:12:11.503455 | orchestrator | Saturday 28 February 2026 01:09:07 +0000 (0:00:04.004) 0:02:29.154 ***** 2026-02-28 01:12:11.503460 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.503464 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.503468 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.503473 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.503477 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.503481 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.503486 | orchestrator | 2026-02-28 01:12:11.503490 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] 
******************************** 2026-02-28 01:12:11.503499 | orchestrator | Saturday 28 February 2026 01:09:13 +0000 (0:00:05.637) 0:02:34.791 ***** 2026-02-28 01:12:11.503504 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.503508 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.503512 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.503517 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.503521 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.503525 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.503530 | orchestrator | 2026-02-28 01:12:11.503534 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-02-28 01:12:11.503538 | orchestrator | Saturday 28 February 2026 01:09:17 +0000 (0:00:04.452) 0:02:39.244 ***** 2026-02-28 01:12:11.503543 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-28 01:12:11.503548 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.503552 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-28 01:12:11.503557 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.503561 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-28 01:12:11.503566 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.503570 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-28 01:12:11.503575 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.503579 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-28 01:12:11.503584 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.503588 | orchestrator | skipping: [testbed-node-5] => 
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-28 01:12:11.503695 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.503738 | orchestrator | 2026-02-28 01:12:11.503743 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-02-28 01:12:11.503748 | orchestrator | Saturday 28 February 2026 01:09:20 +0000 (0:00:02.814) 0:02:42.059 ***** 2026-02-28 01:12:11.503762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.503775 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.503780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.503785 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.503794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.503799 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.503804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.503808 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.503819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.503828 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.503832 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.503837 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.503841 | orchestrator | 2026-02-28 01:12:11.503846 | orchestrator | TASK [service-check-containers : neutron | Check containers] ******************* 2026-02-28 01:12:11.503850 | orchestrator | Saturday 28 February 2026 01:09:24 +0000 (0:00:03.735) 0:02:45.795 ***** 2026-02-28 01:12:11.503855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:12:11.503866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:12:11.503874 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:12:11.503884 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:12:11.503888 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:12:11.503896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:12:11.503901 | orchestrator | 2026-02-28 01:12:11.503906 | orchestrator | TASK [service-check-containers : neutron | Notify handlers to restart containers] *** 2026-02-28 01:12:11.503919 | orchestrator | Saturday 28 February 2026 01:09:28 +0000 (0:00:04.078) 0:02:49.873 ***** 2026-02-28 01:12:11.503923 | orchestrator | changed: [testbed-node-0] => { 2026-02-28 01:12:11.503928 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:12:11.503933 | orchestrator | } 2026-02-28 01:12:11.503937 | orchestrator | changed: [testbed-node-1] => { 2026-02-28 01:12:11.503942 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:12:11.503946 | orchestrator | } 2026-02-28 01:12:11.503951 | orchestrator | changed: [testbed-node-2] => { 2026-02-28 01:12:11.503955 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:12:11.503960 | orchestrator | } 2026-02-28 01:12:11.503964 | orchestrator | changed: [testbed-node-3] => { 2026-02-28 01:12:11.503993 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:12:11.503998 | orchestrator | } 2026-02-28 01:12:11.504002 | orchestrator | changed: [testbed-node-4] => { 2026-02-28 01:12:11.504006 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:12:11.504011 | orchestrator | } 2026-02-28 01:12:11.504015 | orchestrator | changed: [testbed-node-5] => { 2026-02-28 01:12:11.504019 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:12:11.504035 | orchestrator | } 2026-02-28 01:12:11.504040 | orchestrator | 2026-02-28 01:12:11.504044 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-28 01:12:11.504049 | orchestrator | Saturday 28 February 2026 01:09:29 +0000 (0:00:01.197) 0:02:51.072 ***** 2026-02-28 01:12:11.504058 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.504063 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.504068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.504073 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.504077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:12:11.504085 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.504089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.504098 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.504102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.504107 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.504116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:12:11.504121 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.504125 | orchestrator | 2026-02-28 01:12:11.504129 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-02-28 01:12:11.504134 | orchestrator | Saturday 28 February 2026 01:09:32 +0000 (0:00:03.132) 0:02:54.204 ***** 2026-02-28 01:12:11.504138 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.504143 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.504147 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.504151 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:12:11.504156 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:12:11.504160 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:12:11.504164 | orchestrator | 2026-02-28 01:12:11.504169 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-02-28 01:12:11.504173 | orchestrator | Saturday 28 February 2026 01:09:33 +0000 (0:00:00.735) 0:02:54.939 ***** 2026-02-28 01:12:11.504177 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:12:11.504182 | orchestrator | 2026-02-28 01:12:11.504186 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-02-28 01:12:11.504191 | orchestrator | Saturday 28 February 2026 01:09:36 +0000 (0:00:02.595) 0:02:57.535 ***** 2026-02-28 01:12:11.504195 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:12:11.504199 | orchestrator | 2026-02-28 01:12:11.504204 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-02-28 01:12:11.504208 | orchestrator | Saturday 28 February 2026 01:09:38 +0000 (0:00:02.758) 0:03:00.294 ***** 2026-02-28 01:12:11.504212 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:12:11.504217 | orchestrator | 2026-02-28 01:12:11.504221 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-28 01:12:11.504226 | orchestrator | Saturday 28 February 2026 01:10:29 +0000 (0:00:50.726) 0:03:51.020 ***** 2026-02-28 01:12:11.504230 | orchestrator | 
2026-02-28 01:12:11.504234 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-28 01:12:11.504239 | orchestrator | Saturday 28 February 2026 01:10:29 +0000 (0:00:00.068) 0:03:51.088 ***** 2026-02-28 01:12:11.504247 | orchestrator | 2026-02-28 01:12:11.504251 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-28 01:12:11.504256 | orchestrator | Saturday 28 February 2026 01:10:29 +0000 (0:00:00.272) 0:03:51.361 ***** 2026-02-28 01:12:11.504260 | orchestrator | 2026-02-28 01:12:11.504264 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-28 01:12:11.504269 | orchestrator | Saturday 28 February 2026 01:10:29 +0000 (0:00:00.062) 0:03:51.424 ***** 2026-02-28 01:12:11.504273 | orchestrator | 2026-02-28 01:12:11.504280 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-28 01:12:11.504285 | orchestrator | Saturday 28 February 2026 01:10:30 +0000 (0:00:00.079) 0:03:51.504 ***** 2026-02-28 01:12:11.504289 | orchestrator | 2026-02-28 01:12:11.504294 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-28 01:12:11.504298 | orchestrator | Saturday 28 February 2026 01:10:30 +0000 (0:00:00.066) 0:03:51.570 ***** 2026-02-28 01:12:11.504302 | orchestrator | 2026-02-28 01:12:11.504307 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-02-28 01:12:11.504311 | orchestrator | Saturday 28 February 2026 01:10:30 +0000 (0:00:00.068) 0:03:51.639 ***** 2026-02-28 01:12:11.504331 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:12:11.504336 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:12:11.504340 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:12:11.504344 | orchestrator | 2026-02-28 01:12:11.504349 | orchestrator | RUNNING HANDLER [neutron : 
Restart neutron-ovn-metadata-agent container] ******* 2026-02-28 01:12:11.504353 | orchestrator | Saturday 28 February 2026 01:11:03 +0000 (0:00:33.415) 0:04:25.054 ***** 2026-02-28 01:12:11.504358 | orchestrator | changed: [testbed-node-3] 2026-02-28 01:12:11.504362 | orchestrator | changed: [testbed-node-4] 2026-02-28 01:12:11.504366 | orchestrator | changed: [testbed-node-5] 2026-02-28 01:12:11.504401 | orchestrator | 2026-02-28 01:12:11.504406 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:12:11.504411 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-28 01:12:11.504417 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-02-28 01:12:11.504421 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-02-28 01:12:11.504425 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-28 01:12:11.504655 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-28 01:12:11.504664 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-28 01:12:11.504669 | orchestrator | 2026-02-28 01:12:11.504673 | orchestrator | 2026-02-28 01:12:11.504678 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:12:11.504682 | orchestrator | Saturday 28 February 2026 01:12:07 +0000 (0:01:03.631) 0:05:28.686 ***** 2026-02-28 01:12:11.504687 | orchestrator | =============================================================================== 2026-02-28 01:12:11.504691 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 63.63s 2026-02-28 01:12:11.504695 | orchestrator | neutron : Running 
Neutron bootstrap container -------------------------- 50.73s 2026-02-28 01:12:11.504700 | orchestrator | neutron : Restart neutron-server container ----------------------------- 33.42s 2026-02-28 01:12:11.504704 | orchestrator | service-ks-register : neutron | Granting/revoking user roles ------------ 8.13s 2026-02-28 01:12:11.504708 | orchestrator | service-ks-register : neutron | Creating/deleting endpoints ------------- 7.78s 2026-02-28 01:12:11.504718 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 6.80s 2026-02-28 01:12:11.504722 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 5.64s 2026-02-28 01:12:11.504726 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 4.91s 2026-02-28 01:12:11.504731 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.89s 2026-02-28 01:12:11.504735 | orchestrator | neutron : Copying over eswitchd.conf ------------------------------------ 4.65s 2026-02-28 01:12:11.504739 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 4.64s 2026-02-28 01:12:11.504744 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 4.61s 2026-02-28 01:12:11.504748 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.51s 2026-02-28 01:12:11.504752 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 4.45s 2026-02-28 01:12:11.504757 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 4.26s 2026-02-28 01:12:11.504761 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 4.11s 2026-02-28 01:12:11.504765 | orchestrator | neutron : Copying over dnsmasq.conf ------------------------------------- 4.08s 2026-02-28 01:12:11.504770 | orchestrator | service-check-containers : 
neutron | Check containers ------------------- 4.08s 2026-02-28 01:12:11.504774 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 4.00s 2026-02-28 01:12:11.504779 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 3.95s 2026-02-28 01:12:11.504783 | orchestrator | 2026-02-28 01:12:11 | INFO  | Task 7e6ed415-85c6-4ae1-b254-732bd25a8ed0 is in state STARTED 2026-02-28 01:12:11.504788 | orchestrator | 2026-02-28 01:12:11.504792 | orchestrator | 2026-02-28 01:12:11.504796 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:12:11.504801 | orchestrator | 2026-02-28 01:12:11.504805 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:12:11.504813 | orchestrator | Saturday 28 February 2026 01:10:46 +0000 (0:00:00.327) 0:00:00.327 ***** 2026-02-28 01:12:11.504818 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:12:11.504822 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:12:11.504826 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:12:11.504831 | orchestrator | 2026-02-28 01:12:11.504835 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:12:11.504839 | orchestrator | Saturday 28 February 2026 01:10:47 +0000 (0:00:00.497) 0:00:00.824 ***** 2026-02-28 01:12:11.504844 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-02-28 01:12:11.504849 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-02-28 01:12:11.504853 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-02-28 01:12:11.504857 | orchestrator | 2026-02-28 01:12:11.504862 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-02-28 01:12:11.504866 | orchestrator | 2026-02-28 01:12:11.504871 | orchestrator | TASK [placement : include_tasks] 
*********************************************** 2026-02-28 01:12:11.504875 | orchestrator | Saturday 28 February 2026 01:10:47 +0000 (0:00:00.538) 0:00:01.363 ***** 2026-02-28 01:12:11.504879 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:12:11.504885 | orchestrator | 2026-02-28 01:12:11.504889 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************ 2026-02-28 01:12:11.504893 | orchestrator | Saturday 28 February 2026 01:10:48 +0000 (0:00:00.590) 0:00:01.954 ***** 2026-02-28 01:12:11.504898 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-02-28 01:12:11.504902 | orchestrator | 2026-02-28 01:12:11.504906 | orchestrator | TASK [service-ks-register : placement | Creating/deleting endpoints] *********** 2026-02-28 01:12:11.504911 | orchestrator | Saturday 28 February 2026 01:10:52 +0000 (0:00:04.073) 0:00:06.027 ***** 2026-02-28 01:12:11.504919 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-02-28 01:12:11.504924 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-02-28 01:12:11.504928 | orchestrator | 2026-02-28 01:12:11.504932 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-02-28 01:12:11.504937 | orchestrator | Saturday 28 February 2026 01:10:59 +0000 (0:00:07.599) 0:00:13.627 ***** 2026-02-28 01:12:11.504941 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-28 01:12:11.504946 | orchestrator | 2026-02-28 01:12:11.504953 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-02-28 01:12:11.504958 | orchestrator | Saturday 28 February 2026 01:11:03 +0000 (0:00:03.669) 0:00:17.296 ***** 2026-02-28 01:12:11.504962 | orchestrator | changed: [testbed-node-0] => 
(item=placement -> service) 2026-02-28 01:12:11.504967 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-28 01:12:11.504971 | orchestrator | 2026-02-28 01:12:11.504975 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-02-28 01:12:11.504979 | orchestrator | Saturday 28 February 2026 01:11:08 +0000 (0:00:04.496) 0:00:21.793 ***** 2026-02-28 01:12:11.504984 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-28 01:12:11.504988 | orchestrator | 2026-02-28 01:12:11.504993 | orchestrator | TASK [service-ks-register : placement | Granting/revoking user roles] ********** 2026-02-28 01:12:11.504997 | orchestrator | Saturday 28 February 2026 01:11:12 +0000 (0:00:04.204) 0:00:25.997 ***** 2026-02-28 01:12:11.505001 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-02-28 01:12:11.505006 | orchestrator | 2026-02-28 01:12:11.505010 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-28 01:12:11.505014 | orchestrator | Saturday 28 February 2026 01:11:17 +0000 (0:00:05.315) 0:00:31.313 ***** 2026-02-28 01:12:11.505018 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.505023 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.505027 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.505031 | orchestrator | 2026-02-28 01:12:11.505036 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-02-28 01:12:11.505040 | orchestrator | Saturday 28 February 2026 01:11:18 +0000 (0:00:00.693) 0:00:32.006 ***** 2026-02-28 01:12:11.505087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-28 01:12:11.505097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-28 01:12:11.505161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-28 01:12:11.505168 | orchestrator | 2026-02-28 01:12:11.505173 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-02-28 01:12:11.505177 | orchestrator | Saturday 28 February 2026 01:11:19 +0000 (0:00:01.130) 0:00:33.136 ***** 2026-02-28 01:12:11.505181 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.505186 | orchestrator | 2026-02-28 01:12:11.505190 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-02-28 01:12:11.505195 | orchestrator | Saturday 28 February 2026 01:11:19 +0000 (0:00:00.122) 0:00:33.259 ***** 2026-02-28 01:12:11.505199 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.505204 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.505208 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.505212 | orchestrator | 2026-02-28 01:12:11.505217 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-28 
01:12:11.505221 | orchestrator | Saturday 28 February 2026 01:11:20 +0000 (0:00:00.538) 0:00:33.797 ***** 2026-02-28 01:12:11.505226 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:12:11.505230 | orchestrator | 2026-02-28 01:12:11.505235 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-02-28 01:12:11.505239 | orchestrator | Saturday 28 February 2026 01:11:20 +0000 (0:00:00.779) 0:00:34.577 ***** 2026-02-28 01:12:11.505244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-28 01:12:11.505252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-28 01:12:11.505264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-28 01:12:11.505269 | orchestrator | 2026-02-28 01:12:11.505273 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-02-28 01:12:11.505278 | orchestrator 
| Saturday 28 February 2026 01:11:22 +0000 (0:00:02.015) 0:00:36.593 ***** 2026-02-28 01:12:11.505283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-28 01:12:11.505288 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.505292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-28 01:12:11.505306 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.505311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-28 01:12:11.505316 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.505320 | orchestrator | 2026-02-28 01:12:11.505324 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-02-28 01:12:11.505329 | orchestrator | Saturday 28 February 2026 01:11:23 +0000 (0:00:00.849) 0:00:37.442 ***** 2026-02-28 01:12:11.505338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-28 01:12:11.505343 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.505348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk GET /']}}}})  2026-02-28 01:12:11.505356 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.505364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-28 01:12:11.505368 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.505373 | orchestrator | 2026-02-28 01:12:11.505377 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-02-28 01:12:11.505382 | orchestrator | Saturday 28 February 2026 01:11:24 +0000 (0:00:01.046) 0:00:38.488 ***** 2026-02-28 01:12:11.505387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-28 01:12:11.505395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-28 01:12:11.505401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-28 01:12:11.505410 | orchestrator | 2026-02-28 01:12:11.505433 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-02-28 01:12:11.505438 | orchestrator | Saturday 28 February 2026 01:11:26 +0000 (0:00:01.798) 0:00:40.287 ***** 2026-02-28 01:12:11.505446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-28 01:12:11.505454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-28 01:12:11.505459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-28 01:12:11.505464 | orchestrator | 2026-02-28 01:12:11.505468 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-02-28 01:12:11.505477 | orchestrator | Saturday 28 February 2026 01:11:30 +0000 (0:00:04.188) 0:00:44.476 ***** 2026-02-28 01:12:11.505482 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-02-28 01:12:11.505486 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.505491 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-02-28 01:12:11.505495 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.505500 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-02-28 01:12:11.505504 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.505508 | orchestrator | 2026-02-28 01:12:11.505513 | orchestrator | TASK [Configure uWSGI for Placement] ******************************************* 2026-02-28 01:12:11.505517 | orchestrator | Saturday 28 February 2026 01:11:31 +0000 (0:00:00.609) 0:00:45.086 ***** 2026-02-28 01:12:11.505521 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:12:11.505526 | orchestrator | 2026-02-28 01:12:11.505530 | orchestrator | TASK [service-uwsgi-config : Copying over placement-api uWSGI config] ********** 2026-02-28 01:12:11.505535 | orchestrator | Saturday 28 February 2026 01:11:32 +0000 (0:00:00.766) 0:00:45.852 ***** 2026-02-28 01:12:11.505539 | orchestrator | 
changed: [testbed-node-0] 2026-02-28 01:12:11.505543 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:12:11.505548 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:12:11.505552 | orchestrator | 2026-02-28 01:12:11.505557 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-02-28 01:12:11.505564 | orchestrator | Saturday 28 February 2026 01:11:34 +0000 (0:00:02.653) 0:00:48.506 ***** 2026-02-28 01:12:11.505569 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:12:11.505573 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:12:11.505578 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:12:11.505582 | orchestrator | 2026-02-28 01:12:11.505587 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-02-28 01:12:11.505591 | orchestrator | Saturday 28 February 2026 01:11:36 +0000 (0:00:01.585) 0:00:50.091 ***** 2026-02-28 01:12:11.505617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk GET /']}}}})  2026-02-28 01:12:11.505623 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.505632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-28 01:12:11.505640 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.505645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-28 01:12:11.505650 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.505654 | orchestrator | 2026-02-28 01:12:11.505659 | orchestrator | TASK [service-check-containers : placement | Check containers] ***************** 2026-02-28 01:12:11.505663 | orchestrator | Saturday 28 February 2026 01:11:36 +0000 (0:00:00.588) 0:00:50.680 ***** 2026-02-28 01:12:11.505671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-28 01:12:11.505679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-28 01:12:11.505685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-28 01:12:11.505694 | orchestrator | 2026-02-28 01:12:11.505699 | orchestrator | TASK [service-check-containers : 
placement | Notify handlers to restart containers] *** 2026-02-28 01:12:11.505703 | orchestrator | Saturday 28 February 2026 01:11:38 +0000 (0:00:01.307) 0:00:51.988 ***** 2026-02-28 01:12:11.505708 | orchestrator | changed: [testbed-node-0] => { 2026-02-28 01:12:11.505712 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:12:11.505717 | orchestrator | } 2026-02-28 01:12:11.505721 | orchestrator | changed: [testbed-node-1] => { 2026-02-28 01:12:11.505725 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:12:11.505730 | orchestrator | } 2026-02-28 01:12:11.505734 | orchestrator | changed: [testbed-node-2] => { 2026-02-28 01:12:11.505739 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:12:11.505743 | orchestrator | } 2026-02-28 01:12:11.505747 | orchestrator | 2026-02-28 01:12:11.505752 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-28 01:12:11.505756 | orchestrator | Saturday 28 February 2026 01:11:38 +0000 (0:00:00.640) 0:00:52.629 ***** 2026-02-28 01:12:11 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:12:11.505768 | orchestrator | 2026-02-28 01:12:11 | INFO  | Task 38bfc1a7-6f10-43be-84c9-9e685e24f608 is in state SUCCESS 2026-02-28 01:12:11.505773 | orchestrator | 2026-02-28 01:12:11.505761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-28 01:12:11.505779 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:12:11.505784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-28 01:12:11.505794 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:12:11.505802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-28 01:12:11.505808 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:12:11.505813 | orchestrator | 2026-02-28 01:12:11.505818 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-02-28 01:12:11.505823 | orchestrator | Saturday 28 February 2026 01:11:39 +0000 (0:00:00.883) 0:00:53.512 ***** 2026-02-28 01:12:11.505828 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:12:11.505832 | orchestrator | 2026-02-28 01:12:11.505838 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-02-28 01:12:11.505843 | orchestrator | Saturday 28 February 2026 01:11:42 +0000 (0:00:02.319) 0:00:55.831 ***** 2026-02-28 01:12:11.505848 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:12:11.505853 | orchestrator | 2026-02-28 01:12:11.505858 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-02-28 01:12:11.505863 | orchestrator | Saturday 28 February 2026 01:11:44 +0000 (0:00:02.448) 0:00:58.280 ***** 2026-02-28 01:12:11.505868 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:12:11.505873 | orchestrator | 2026-02-28 01:12:11.505878 | orchestrator | TASK [placement : Flush handlers] 
********************************************** 2026-02-28 01:12:11.505883 | orchestrator | Saturday 28 February 2026 01:12:00 +0000 (0:00:15.645) 0:01:13.925 ***** 2026-02-28 01:12:11.505888 | orchestrator | 2026-02-28 01:12:11.505893 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-28 01:12:11.505898 | orchestrator | Saturday 28 February 2026 01:12:00 +0000 (0:00:00.182) 0:01:14.107 ***** 2026-02-28 01:12:11.505903 | orchestrator | 2026-02-28 01:12:11.505908 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-28 01:12:11.505913 | orchestrator | Saturday 28 February 2026 01:12:00 +0000 (0:00:00.535) 0:01:14.642 ***** 2026-02-28 01:12:11.505918 | orchestrator | 2026-02-28 01:12:11.505923 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-02-28 01:12:11.505928 | orchestrator | Saturday 28 February 2026 01:12:00 +0000 (0:00:00.075) 0:01:14.718 ***** 2026-02-28 01:12:11.505933 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:12:11.505938 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:12:11.505946 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:12:11.505951 | orchestrator | 2026-02-28 01:12:11.505956 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:12:11.505961 | orchestrator | testbed-node-0 : ok=23  changed=16  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-28 01:12:11.505971 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-28 01:12:11.505976 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-28 01:12:11.505981 | orchestrator | 2026-02-28 01:12:11.505986 | orchestrator | 2026-02-28 01:12:11.505991 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-28 01:12:11.505995 | orchestrator | Saturday 28 February 2026 01:12:07 +0000 (0:00:06.513) 0:01:21.231 ***** 2026-02-28 01:12:11.506000 | orchestrator | =============================================================================== 2026-02-28 01:12:11.506005 | orchestrator | placement : Running placement bootstrap container ---------------------- 15.64s 2026-02-28 01:12:11.506010 | orchestrator | service-ks-register : placement | Creating/deleting endpoints ----------- 7.60s 2026-02-28 01:12:11.506073 | orchestrator | placement : Restart placement-api container ----------------------------- 6.51s 2026-02-28 01:12:11.506078 | orchestrator | service-ks-register : placement | Granting/revoking user roles ---------- 5.32s 2026-02-28 01:12:11.506083 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.50s 2026-02-28 01:12:11.506088 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 4.20s 2026-02-28 01:12:11.506093 | orchestrator | placement : Copying over placement.conf --------------------------------- 4.19s 2026-02-28 01:12:11.506098 | orchestrator | service-ks-register : placement | Creating/deleting services ------------ 4.07s 2026-02-28 01:12:11.506103 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.67s 2026-02-28 01:12:11.506108 | orchestrator | service-uwsgi-config : Copying over placement-api uWSGI config ---------- 2.65s 2026-02-28 01:12:11.506116 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.45s 2026-02-28 01:12:11.506122 | orchestrator | placement : Creating placement databases -------------------------------- 2.32s 2026-02-28 01:12:11.506127 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.02s 2026-02-28 01:12:11.506132 | orchestrator | placement : Copying over 
config.json files for services ----------------- 1.80s 2026-02-28 01:12:11.506137 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.59s 2026-02-28 01:12:11.506142 | orchestrator | service-check-containers : placement | Check containers ----------------- 1.31s 2026-02-28 01:12:11.506147 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.13s 2026-02-28 01:12:11.506152 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.05s 2026-02-28 01:12:11.506158 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.88s 2026-02-28 01:12:11.506163 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.85s 2026-02-28 01:12:11.506439 | orchestrator | 2026-02-28 01:12:11 | INFO  | Task 0a31fa30-88d7-419e-9efd-0db2ebe25a0f is in state STARTED 2026-02-28 01:12:11.506448 | orchestrator | 2026-02-28 01:12:11 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:12:14.555441 | orchestrator | 2026-02-28 01:12:14 | INFO  | Task e5fe25b6-f246-46db-bfae-26ef50103f65 is in state STARTED 2026-02-28 01:12:14.556257 | orchestrator | 2026-02-28 01:12:14 | INFO  | Task 7e6ed415-85c6-4ae1-b254-732bd25a8ed0 is in state STARTED 2026-02-28 01:12:14.557129 | orchestrator | 2026-02-28 01:12:14 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:12:14.558154 | orchestrator | 2026-02-28 01:12:14 | INFO  | Task 0a31fa30-88d7-419e-9efd-0db2ebe25a0f is in state STARTED 2026-02-28 01:12:14.558868 | orchestrator | 2026-02-28 01:12:14 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:12:17.586006 | orchestrator | 2026-02-28 01:12:17 | INFO  | Task e5fe25b6-f246-46db-bfae-26ef50103f65 is in state STARTED 2026-02-28 01:12:17.586377 | orchestrator | 2026-02-28 01:12:17 | INFO  | Task 7e6ed415-85c6-4ae1-b254-732bd25a8ed0 is in state STARTED 2026-02-28 
01:12:17.587190 | orchestrator | 2026-02-28 01:12:17 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:12:17.587993 | orchestrator | 2026-02-28 01:12:17 | INFO  | Task 0a31fa30-88d7-419e-9efd-0db2ebe25a0f is in state STARTED 2026-02-28 01:12:17.588051 | orchestrator | 2026-02-28 01:12:17 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:12:20.620993 | orchestrator | 2026-02-28 01:12:20 | INFO  | Task e5fe25b6-f246-46db-bfae-26ef50103f65 is in state STARTED 2026-02-28 01:12:20.621226 | orchestrator | 2026-02-28 01:12:20 | INFO  | Task 7e6ed415-85c6-4ae1-b254-732bd25a8ed0 is in state STARTED 2026-02-28 01:12:20.622433 | orchestrator | 2026-02-28 01:12:20 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:12:20.622635 | orchestrator | 2026-02-28 01:12:20 | INFO  | Task 0a31fa30-88d7-419e-9efd-0db2ebe25a0f is in state SUCCESS 2026-02-28 01:12:20.622848 | orchestrator | 2026-02-28 01:12:20 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:12:23.654311 | orchestrator | 2026-02-28 01:12:23 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:12:23.654576 | orchestrator | 2026-02-28 01:12:23 | INFO  | Task e5fe25b6-f246-46db-bfae-26ef50103f65 is in state STARTED 2026-02-28 01:12:23.656372 | orchestrator | 2026-02-28 01:12:23 | INFO  | Task 7e6ed415-85c6-4ae1-b254-732bd25a8ed0 is in state STARTED 2026-02-28 01:12:23.657433 | orchestrator | 2026-02-28 01:12:23 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:12:23.657490 | orchestrator | 2026-02-28 01:12:23 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:12:26.692795 | orchestrator | 2026-02-28 01:12:26 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:12:26.693252 | orchestrator | 2026-02-28 01:12:26 | INFO  | Task e5fe25b6-f246-46db-bfae-26ef50103f65 is in state STARTED 2026-02-28 01:12:26.694269 | orchestrator 
| 2026-02-28 01:12:26 | INFO  | Task 7e6ed415-85c6-4ae1-b254-732bd25a8ed0 is in state STARTED 2026-02-28 01:12:26.695302 | orchestrator | 2026-02-28 01:12:26 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:12:26.695348 | orchestrator | 2026-02-28 01:12:26 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:13:39.802467 | orchestrator | 2026-02-28 01:13:39 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:13:39.804763 | orchestrator | 2026-02-28 01:13:39 | INFO  | Task e5fe25b6-f246-46db-bfae-26ef50103f65 is in state STARTED 2026-02-28 01:13:39.806847 | orchestrator | 2026-02-28 01:13:39 | INFO  | Task 
7e6ed415-85c6-4ae1-b254-732bd25a8ed0 is in state STARTED 2026-02-28 01:13:39.808683 | orchestrator | 2026-02-28 01:13:39 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:13:39.808717 | orchestrator | 2026-02-28 01:13:39 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:13:42.859072 | orchestrator | 2026-02-28 01:13:42 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:13:42.859198 | orchestrator | 2026-02-28 01:13:42 | INFO  | Task e5fe25b6-f246-46db-bfae-26ef50103f65 is in state SUCCESS 2026-02-28 01:13:42.859264 | orchestrator | 2026-02-28 01:13:42.859272 | orchestrator | 2026-02-28 01:13:42.859277 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:13:42.859315 | orchestrator | 2026-02-28 01:13:42.859324 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:13:42.859333 | orchestrator | Saturday 28 February 2026 01:12:16 +0000 (0:00:00.200) 0:00:00.200 ***** 2026-02-28 01:13:42.859341 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:13:42.859351 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:13:42.859358 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:13:42.859366 | orchestrator | 2026-02-28 01:13:42.859374 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:13:42.859381 | orchestrator | Saturday 28 February 2026 01:12:17 +0000 (0:00:00.565) 0:00:00.766 ***** 2026-02-28 01:13:42.859390 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2026-02-28 01:13:42.859396 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2026-02-28 01:13:42.859401 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2026-02-28 01:13:42.859406 | orchestrator | 2026-02-28 01:13:42.859410 | orchestrator | PLAY [Wait for the Nova service] 
*********************************************** 2026-02-28 01:13:42.859415 | orchestrator | 2026-02-28 01:13:42.859420 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2026-02-28 01:13:42.859437 | orchestrator | Saturday 28 February 2026 01:12:18 +0000 (0:00:01.323) 0:00:02.089 ***** 2026-02-28 01:13:42.859442 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:13:42.859447 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:13:42.859452 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:13:42.859456 | orchestrator | 2026-02-28 01:13:42.859461 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:13:42.859467 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:13:42.859474 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:13:42.859479 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:13:42.859484 | orchestrator | 2026-02-28 01:13:42.859489 | orchestrator | 2026-02-28 01:13:42.859494 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:13:42.859499 | orchestrator | Saturday 28 February 2026 01:12:19 +0000 (0:00:01.155) 0:00:03.244 ***** 2026-02-28 01:13:42.859504 | orchestrator | =============================================================================== 2026-02-28 01:13:42.859508 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.32s 2026-02-28 01:13:42.859513 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 1.16s 2026-02-28 01:13:42.859518 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.57s 2026-02-28 01:13:42.859523 | orchestrator | 2026-02-28 01:13:42.860914 | orchestrator 
|
2026-02-28 01:13:42.860965 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 01:13:42.860975 | orchestrator |
2026-02-28 01:13:42.860982 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 01:13:42.860990 | orchestrator | Saturday 28 February 2026 01:11:17 +0000 (0:00:00.861) 0:00:00.861 *****
2026-02-28 01:13:42.860999 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:13:42.861008 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:13:42.861016 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:13:42.861024 | orchestrator |
2026-02-28 01:13:42.861033 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 01:13:42.861041 | orchestrator | Saturday 28 February 2026 01:11:17 +0000 (0:00:00.429) 0:00:01.290 *****
2026-02-28 01:13:42.861049 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-02-28 01:13:42.861058 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-02-28 01:13:42.861065 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-02-28 01:13:42.861073 | orchestrator |
2026-02-28 01:13:42.861080 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-02-28 01:13:42.861231 | orchestrator |
2026-02-28 01:13:42.861243 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-02-28 01:13:42.861295 | orchestrator | Saturday 28 February 2026 01:11:18 +0000 (0:00:00.583) 0:00:01.874 *****
2026-02-28 01:13:42.861334 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 01:13:42.861371 | orchestrator |
2026-02-28 01:13:42.861380 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting services] ***************
2026-02-28 01:13:42.861390 | orchestrator | Saturday 28
February 2026 01:11:19 +0000 (0:00:00.724) 0:00:02.598 *****
2026-02-28 01:13:42.861402 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-02-28 01:13:42.861410 | orchestrator |
2026-02-28 01:13:42.861417 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting endpoints] **************
2026-02-28 01:13:42.861424 | orchestrator | Saturday 28 February 2026 01:11:23 +0000 (0:00:03.968) 0:00:06.567 *****
2026-02-28 01:13:42.861432 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-02-28 01:13:42.861439 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-02-28 01:13:42.861448 | orchestrator |
2026-02-28 01:13:42.861456 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-02-28 01:13:42.861464 | orchestrator | Saturday 28 February 2026 01:11:30 +0000 (0:00:07.562) 0:00:14.130 *****
2026-02-28 01:13:42.861472 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-28 01:13:42.861480 | orchestrator |
2026-02-28 01:13:42.861488 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-02-28 01:13:42.861496 | orchestrator | Saturday 28 February 2026 01:11:34 +0000 (0:00:03.757) 0:00:17.887 *****
2026-02-28 01:13:42.861504 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-02-28 01:13:42.861510 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-28 01:13:42.861516 | orchestrator |
2026-02-28 01:13:42.861521 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-02-28 01:13:42.861527 | orchestrator | Saturday 28 February 2026 01:11:38 +0000 (0:00:04.350) 0:00:22.238 *****
2026-02-28 01:13:42.861533 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-28 01:13:42.861538 | orchestrator |
2026-02-28 01:13:42.861567 | orchestrator | TASK [service-ks-register : magnum | Granting/revoking user roles] *************
2026-02-28 01:13:42.861574 | orchestrator | Saturday 28 February 2026 01:11:42 +0000 (0:00:03.730) 0:00:25.969 *****
2026-02-28 01:13:42.861579 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-02-28 01:13:42.861585 | orchestrator |
2026-02-28 01:13:42.861591 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-02-28 01:13:42.861597 | orchestrator | Saturday 28 February 2026 01:11:47 +0000 (0:00:04.401) 0:00:30.370 *****
2026-02-28 01:13:42.861603 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:13:42.861609 | orchestrator |
2026-02-28 01:13:42.861614 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-02-28 01:13:42.861632 | orchestrator | Saturday 28 February 2026 01:11:50 +0000 (0:00:03.649) 0:00:34.020 *****
2026-02-28 01:13:42.861637 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:13:42.861643 | orchestrator |
2026-02-28 01:13:42.861648 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-02-28 01:13:42.861654 | orchestrator | Saturday 28 February 2026 01:11:55 +0000 (0:00:04.489) 0:00:38.509 *****
2026-02-28 01:13:42.861660 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:13:42.861665 | orchestrator |
2026-02-28 01:13:42.861671 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-02-28 01:13:42.861676 | orchestrator | Saturday 28 February 2026 01:11:58 +0000 (0:00:03.804) 0:00:42.314 *****
2026-02-28 01:13:42.861703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT':
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:42.861722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:42.861729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:42.861739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:42.861746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:42.861764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:42.861773 | orchestrator | 2026-02-28 01:13:42.861781 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-02-28 01:13:42.861790 | orchestrator | Saturday 28 February 2026 01:12:01 +0000 (0:00:02.611) 0:00:44.925 ***** 2026-02-28 01:13:42.861798 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:42.861806 | orchestrator | 2026-02-28 01:13:42.861814 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-02-28 01:13:42.861821 | orchestrator | Saturday 28 February 2026 01:12:01 +0000 (0:00:00.159) 0:00:45.085 ***** 2026-02-28 01:13:42.861830 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:42.861839 | orchestrator | 
skipping: [testbed-node-1] 2026-02-28 01:13:42.861848 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:42.861858 | orchestrator | 2026-02-28 01:13:42.861863 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-02-28 01:13:42.861868 | orchestrator | Saturday 28 February 2026 01:12:02 +0000 (0:00:00.915) 0:00:46.000 ***** 2026-02-28 01:13:42.861872 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 01:13:42.861877 | orchestrator | 2026-02-28 01:13:42.861882 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-02-28 01:13:42.861887 | orchestrator | Saturday 28 February 2026 01:12:03 +0000 (0:00:01.172) 0:00:47.173 ***** 2026-02-28 01:13:42.861892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:42.861902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:42.861918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:42.861924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:42.861930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:42.861935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:42.861940 | orchestrator | 2026-02-28 01:13:42.861945 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-02-28 01:13:42.861950 | orchestrator | Saturday 28 February 2026 01:12:07 +0000 (0:00:03.642) 0:00:50.816 ***** 2026-02-28 01:13:42.861962 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:13:42.861974 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:13:42.861983 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:13:42.861990 | orchestrator | 2026-02-28 01:13:42.861998 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-28 01:13:42.862006 | orchestrator | Saturday 28 February 2026 01:12:08 +0000 (0:00:01.029) 0:00:51.845 ***** 2026-02-28 01:13:42.862071 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:13:42.862084 | orchestrator | 2026-02-28 01:13:42.862092 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-02-28 01:13:42.862099 | orchestrator | Saturday 28 February 2026 01:12:10 +0000 (0:00:01.983) 0:00:53.828 ***** 2026-02-28 01:13:42.862116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:42.862122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:42.862128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:42.862134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:42.862149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:42.862159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:42.862164 | orchestrator | 2026-02-28 01:13:42.862169 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-02-28 01:13:42.862174 | orchestrator | Saturday 28 February 2026 01:12:13 +0000 (0:00:03.229) 0:00:57.057 ***** 2026-02-28 01:13:42.862179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 
'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:13:42.862184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:13:42.862195 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:42.862208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:13:42.862213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:13:42.862218 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:42.862228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 
'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:13:42.862234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:13:42.862239 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:13:42.862244 | orchestrator |
2026-02-28 01:13:42.862249 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
2026-02-28 01:13:42.862257 | orchestrator | Saturday 28 February 2026 01:12:15 +0000 (0:00:01.903) 0:00:58.961 *****
2026-02-28 01:13:42.862264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:13:42.862277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:13:42.862285 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:13:42.862300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:13:42.862309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:13:42.862324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:13:42.862331 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:13:42.862342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:13:42.862350 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:13:42.862356 | orchestrator |
2026-02-28 01:13:42.862363 | orchestrator | TASK [magnum : Copying over config.json files for services] ********************
2026-02-28 01:13:42.862371 | orchestrator | Saturday 28 February 2026 01:12:17 +0000 (0:00:02.265) 0:01:01.227 *****
2026-02-28 01:13:42.862637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:13:42.862720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:13:42.862733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:13:42.862766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:13:42.862786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:13:42.862811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:13:42.862836 | orchestrator |
2026-02-28 01:13:42.862846 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2026-02-28 01:13:42.862856 | orchestrator | Saturday 28 February 2026 01:12:21 +0000 (0:00:03.376) 0:01:04.603 *****
2026-02-28 01:13:42.862865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:13:42.862881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:13:42.862895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:13:42.862911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:13:42.862920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:13:42.862934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:13:42.862943 | orchestrator |
2026-02-28 01:13:42.862951 | orchestrator | TASK [magnum : Copying over existing policy file] ******************************
2026-02-28 01:13:42.862959 | orchestrator | Saturday 28 February 2026 01:12:31 +0000 (0:00:10.146) 0:01:14.750 *****
2026-02-28 01:13:42.862967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:13:42.862980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:13:42.862996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:13:42.863005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:13:42.863018 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:13:42.863027 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:13:42.863035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:13:42.863049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:13:42.863057 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:13:42.863065 | orchestrator |
2026-02-28 01:13:42.863074 | orchestrator | TASK [service-check-containers : magnum | Check containers] ********************
2026-02-28 01:13:42.863082 | orchestrator | Saturday 28 February 2026 01:12:32 +0000 (0:00:00.845) 0:01:15.595 *****
2026-02-28 01:13:42.863102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:13:42.863119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:13:42.863144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:13:42.863161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:13:42.863182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:13:42.863204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:13:42.863222 | orchestrator |
2026-02-28 01:13:42.863232 | orchestrator | TASK [service-check-containers : magnum | Notify handlers to restart containers] ***
2026-02-28 01:13:42.863243 | orchestrator | Saturday 28 February 2026 01:12:36 +0000 (0:00:03.990) 0:01:19.585 *****
2026-02-28 01:13:42.863252 | orchestrator | changed: [testbed-node-0] => {
2026-02-28 01:13:42.863260 | orchestrator |     "msg": "Notifying handlers"
2026-02-28 01:13:42.863270 | orchestrator | }
2026-02-28 01:13:42.863280 | orchestrator | changed: [testbed-node-1] => {
2026-02-28 01:13:42.863289 | orchestrator |     "msg": "Notifying handlers"
2026-02-28 01:13:42.863298 | orchestrator | }
2026-02-28 01:13:42.863307 | orchestrator | changed: [testbed-node-2] => {
2026-02-28 01:13:42.863316 | orchestrator |     "msg": "Notifying handlers"
2026-02-28 01:13:42.863325 | orchestrator | }
2026-02-28 01:13:42.863335 | orchestrator |
2026-02-28 01:13:42.863344 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-28 01:13:42.863354 | orchestrator | Saturday 28 February 2026 01:12:37 +0000 (0:00:00.960) 0:01:20.546 *****
2026-02-28 01:13:42.863364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:13:42.863375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:13:42.863384 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:13:42.863400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:13:42.863419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:13:42.863434 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:13:42.863444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-02-28 01:13:42.863453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:13:42.863461 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:13:42.863469 | orchestrator |
2026-02-28 01:13:42.863477 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-02-28 01:13:42.863485 | orchestrator | Saturday 28 February 2026 01:12:38 +0000 (0:00:01.310) 0:01:21.857 *****
2026-02-28 01:13:42.863493 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:13:42.863501 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:13:42.863509 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:13:42.863517 | orchestrator |
2026-02-28 01:13:42.863524 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2026-02-28 01:13:42.863532 | orchestrator | Saturday 28 February 2026 01:12:39 +0000 (0:00:00.650) 0:01:22.508 *****
2026-02-28 01:13:42.863566 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:13:42.863576 | orchestrator |
2026-02-28 01:13:42.863585 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2026-02-28 01:13:42.863593 | orchestrator | Saturday 28 February 2026 01:12:41 +0000 (0:00:02.349) 0:01:24.857 *****
2026-02-28 01:13:42.863600 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:13:42.863608 | orchestrator |
2026-02-28 01:13:42.863622 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2026-02-28 01:13:42.863630 | orchestrator | Saturday 28 February 2026 01:12:44 +0000 (0:00:02.603) 0:01:27.460 *****
2026-02-28 01:13:42.863638 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:13:42.863646 | orchestrator |
2026-02-28 01:13:42.863666 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-02-28 01:13:42.863676 | orchestrator | Saturday 28 February 2026 01:13:06 +0000 (0:00:22.607) 0:01:50.067 *****
2026-02-28 01:13:42.863689 | orchestrator |
2026-02-28 01:13:42.863731 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-02-28 01:13:42.863746 | orchestrator | Saturday 28 February 2026 01:13:06 +0000 (0:00:00.090) 0:01:50.158 *****
2026-02-28 01:13:42.863758 | orchestrator |
2026-02-28 01:13:42.863769 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-02-28 01:13:42.863781 | orchestrator | Saturday 28 February 2026 01:13:06 +0000 (0:00:00.075) 0:01:50.233 *****
2026-02-28 01:13:42.863793 | orchestrator |
2026-02-28 01:13:42.863806 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-02-28 01:13:42.863818 | orchestrator | Saturday 28 February 2026 01:13:06 +0000 (0:00:00.074) 0:01:50.308 *****
2026-02-28 01:13:42.863830 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:13:42.863844 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:13:42.863857 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:13:42.863868 | orchestrator |
2026-02-28 01:13:42.863878
| orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-02-28 01:13:42.863890 | orchestrator | Saturday 28 February 2026 01:13:28 +0000 (0:00:21.904) 0:02:12.213 ***** 2026-02-28 01:13:42.863902 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:13:42.863927 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:13:42.863941 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:13:42.863955 | orchestrator | 2026-02-28 01:13:42.863968 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:13:42.863981 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-28 01:13:42.863991 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 01:13:42.863999 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 01:13:42.864006 | orchestrator | 2026-02-28 01:13:42.864014 | orchestrator | 2026-02-28 01:13:42.864022 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:13:42.864030 | orchestrator | Saturday 28 February 2026 01:13:40 +0000 (0:00:11.999) 0:02:24.212 ***** 2026-02-28 01:13:42.864038 | orchestrator | =============================================================================== 2026-02-28 01:13:42.864051 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 22.61s 2026-02-28 01:13:42.864063 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 21.91s 2026-02-28 01:13:42.864077 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 12.00s 2026-02-28 01:13:42.864089 | orchestrator | magnum : Copying over magnum.conf -------------------------------------- 10.15s 2026-02-28 01:13:42.864102 | orchestrator | 
service-ks-register : magnum | Creating/deleting endpoints -------------- 7.56s 2026-02-28 01:13:42.864114 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.49s 2026-02-28 01:13:42.864127 | orchestrator | service-ks-register : magnum | Granting/revoking user roles ------------- 4.40s 2026-02-28 01:13:42.864139 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.35s 2026-02-28 01:13:42.864152 | orchestrator | service-check-containers : magnum | Check containers -------------------- 3.99s 2026-02-28 01:13:42.864163 | orchestrator | service-ks-register : magnum | Creating/deleting services --------------- 3.97s 2026-02-28 01:13:42.864174 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.80s 2026-02-28 01:13:42.864187 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.76s 2026-02-28 01:13:42.864200 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.73s 2026-02-28 01:13:42.864230 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.65s 2026-02-28 01:13:42.864242 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.64s 2026-02-28 01:13:42.864255 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.38s 2026-02-28 01:13:42.864268 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.23s 2026-02-28 01:13:42.864280 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.61s 2026-02-28 01:13:42.864292 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.60s 2026-02-28 01:13:42.864305 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.35s 2026-02-28 01:13:42.864325 | orchestrator | 2026-02-28 
01:13:42 | INFO  | Task 7e6ed415-85c6-4ae1-b254-732bd25a8ed0 is in state STARTED 2026-02-28 01:13:42.866312 | orchestrator | 2026-02-28 01:13:42 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:13:42.866367 | orchestrator | 2026-02-28 01:13:42 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:13:45.923094 | orchestrator | 2026-02-28 01:13:45 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:13:45.926060 | orchestrator | 2026-02-28 01:13:45 | INFO  | Task 7e6ed415-85c6-4ae1-b254-732bd25a8ed0 is in state STARTED 2026-02-28 01:13:45.928294 | orchestrator | 2026-02-28 01:13:45 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:13:45.929081 | orchestrator | 2026-02-28 01:13:45 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:13:48.978573 | orchestrator | 2026-02-28 01:13:48 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:13:48.979864 | orchestrator | 2026-02-28 01:13:48 | INFO  | Task 7e6ed415-85c6-4ae1-b254-732bd25a8ed0 is in state STARTED 2026-02-28 01:13:48.983154 | orchestrator | 2026-02-28 01:13:48 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:13:48.983206 | orchestrator | 2026-02-28 01:13:48 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:13:52.015625 | orchestrator | 2026-02-28 01:13:52 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:13:52.018011 | orchestrator | 2026-02-28 01:13:52 | INFO  | Task 7e6ed415-85c6-4ae1-b254-732bd25a8ed0 is in state STARTED 2026-02-28 01:13:52.019242 | orchestrator | 2026-02-28 01:13:52 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:13:52.019445 | orchestrator | 2026-02-28 01:13:52 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:13:55.081268 | orchestrator | 2026-02-28 01:13:55 | INFO  | Task 
fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:13:55.081346 | orchestrator | 2026-02-28 01:13:55 | INFO  | Task 7e6ed415-85c6-4ae1-b254-732bd25a8ed0 is in state STARTED 2026-02-28 01:13:55.081352 | orchestrator | 2026-02-28 01:13:55 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:13:55.081357 | orchestrator | 2026-02-28 01:13:55 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:13:58.116072 | orchestrator | 2026-02-28 01:13:58 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:13:58.118382 | orchestrator | 2026-02-28 01:13:58 | INFO  | Task 7e6ed415-85c6-4ae1-b254-732bd25a8ed0 is in state SUCCESS 2026-02-28 01:13:58.119706 | orchestrator | 2026-02-28 01:13:58.119752 | orchestrator | 2026-02-28 01:13:58.119766 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:13:58.119779 | orchestrator | 2026-02-28 01:13:58.119790 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:13:58.119828 | orchestrator | Saturday 28 February 2026 01:12:17 +0000 (0:00:00.207) 0:00:00.207 ***** 2026-02-28 01:13:58.119840 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:13:58.119853 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:13:58.119864 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:13:58.119875 | orchestrator | 2026-02-28 01:13:58.119886 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:13:58.119898 | orchestrator | Saturday 28 February 2026 01:12:17 +0000 (0:00:00.358) 0:00:00.566 ***** 2026-02-28 01:13:58.119909 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-02-28 01:13:58.119921 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-02-28 01:13:58.119932 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-02-28 
01:13:58.119943 | orchestrator | 2026-02-28 01:13:58.119954 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-02-28 01:13:58.119977 | orchestrator | 2026-02-28 01:13:58.119988 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-28 01:13:58.119999 | orchestrator | Saturday 28 February 2026 01:12:18 +0000 (0:00:00.637) 0:00:01.203 ***** 2026-02-28 01:13:58.120011 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:13:58.120023 | orchestrator | 2026-02-28 01:13:58.120034 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-02-28 01:13:58.120045 | orchestrator | Saturday 28 February 2026 01:12:19 +0000 (0:00:01.180) 0:00:02.384 ***** 2026-02-28 01:13:58.120059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:58.120091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:58.120104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:58.120116 | orchestrator | 2026-02-28 01:13:58.120127 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-02-28 01:13:58.120148 | orchestrator | Saturday 28 February 2026 01:12:20 +0000 (0:00:01.197) 0:00:03.581 ***** 2026-02-28 01:13:58.120159 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 01:13:58.120171 | orchestrator | 2026-02-28 01:13:58.120182 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-28 01:13:58.120193 | orchestrator | Saturday 28 February 2026 01:12:21 +0000 (0:00:01.310) 0:00:04.892 ***** 2026-02-28 01:13:58.120213 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:13:58.120225 | orchestrator | 2026-02-28 01:13:58.120236 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-02-28 01:13:58.120261 | orchestrator | Saturday 28 February 2026 01:12:22 +0000 (0:00:01.025) 0:00:05.918 ***** 2026-02-28 01:13:58.120274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:58.120286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:58.120304 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:58.120316 | orchestrator | 2026-02-28 01:13:58.120328 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-02-28 01:13:58.120339 | orchestrator | Saturday 28 February 2026 01:12:25 +0000 (0:00:02.814) 0:00:08.732 ***** 2026-02-28 01:13:58.120350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:13:58.120369 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:58.120381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:13:58.120402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:13:58.120414 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:58.120425 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:58.120436 | orchestrator | 2026-02-28 01:13:58.120447 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-02-28 01:13:58.120458 | orchestrator | Saturday 28 February 2026 01:12:26 +0000 (0:00:01.080) 0:00:09.813 ***** 2026-02-28 01:13:58.120469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:13:58.120481 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:58.120498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:13:58.120510 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:58.120521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:13:58.120572 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:58.120585 | orchestrator | 2026-02-28 01:13:58.120596 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-02-28 01:13:58.120607 | orchestrator | Saturday 28 February 2026 01:12:29 +0000 (0:00:03.083) 0:00:12.896 ***** 2026-02-28 01:13:58.120626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:58.120638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:58.120650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:58.120662 | orchestrator | 2026-02-28 01:13:58.120672 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-02-28 01:13:58.120683 | orchestrator | Saturday 28 February 2026 01:12:31 +0000 (0:00:01.795) 0:00:14.691 ***** 2026-02-28 01:13:58.120700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:58.120719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:58.120731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:58.120742 | orchestrator | 2026-02-28 01:13:58.120753 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-02-28 01:13:58.120771 | orchestrator | Saturday 28 February 2026 01:12:33 +0000 
(0:00:01.764) 0:00:16.456 ***** 2026-02-28 01:13:58.120782 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:58.120793 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:58.120804 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:58.120815 | orchestrator | 2026-02-28 01:13:58.120826 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-02-28 01:13:58.120838 | orchestrator | Saturday 28 February 2026 01:12:34 +0000 (0:00:01.294) 0:00:17.751 ***** 2026-02-28 01:13:58.120848 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-28 01:13:58.120859 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-28 01:13:58.120870 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-28 01:13:58.120881 | orchestrator | 2026-02-28 01:13:58.120892 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-02-28 01:13:58.120903 | orchestrator | Saturday 28 February 2026 01:12:36 +0000 (0:00:02.162) 0:00:19.914 ***** 2026-02-28 01:13:58.120914 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-28 01:13:58.120925 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-28 01:13:58.120936 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-28 01:13:58.120947 | orchestrator | 2026-02-28 01:13:58.120958 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ****** 2026-02-28 01:13:58.120969 | orchestrator | Saturday 28 February 2026 01:12:38 +0000 (0:00:01.601) 0:00:21.516 ***** 2026-02-28 
01:13:58.120980 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 01:13:58.120991 | orchestrator | 2026-02-28 01:13:58.121002 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] *************************** 2026-02-28 01:13:58.121021 | orchestrator | Saturday 28 February 2026 01:12:39 +0000 (0:00:00.917) 0:00:22.434 ***** 2026-02-28 01:13:58.121039 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:13:58.121072 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:13:58.121090 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:13:58.121107 | orchestrator | 2026-02-28 01:13:58.121124 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-02-28 01:13:58.121141 | orchestrator | Saturday 28 February 2026 01:12:39 +0000 (0:00:00.730) 0:00:23.164 ***** 2026-02-28 01:13:58.121159 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:13:58.121177 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:13:58.121194 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:13:58.121214 | orchestrator | 2026-02-28 01:13:58.121233 | orchestrator | TASK [service-check-containers : grafana | Check containers] ******************* 2026-02-28 01:13:58.121313 | orchestrator | Saturday 28 February 2026 01:12:41 +0000 (0:00:01.630) 0:00:24.795 ***** 2026-02-28 01:13:58.121335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:58.121348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:58.121372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:13:58.121384 | orchestrator | 2026-02-28 01:13:58.121395 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] *** 2026-02-28 01:13:58.121406 | orchestrator | Saturday 28 February 2026 01:12:42 +0000 (0:00:01.222) 0:00:26.018 ***** 2026-02-28 
01:13:58.121417 | orchestrator | changed: [testbed-node-0] => { 2026-02-28 01:13:58.121428 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:13:58.121440 | orchestrator | } 2026-02-28 01:13:58.121451 | orchestrator | changed: [testbed-node-1] => { 2026-02-28 01:13:58.121462 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:13:58.121473 | orchestrator | } 2026-02-28 01:13:58.121484 | orchestrator | changed: [testbed-node-2] => { 2026-02-28 01:13:58.121494 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:13:58.121505 | orchestrator | } 2026-02-28 01:13:58.121516 | orchestrator | 2026-02-28 01:13:58.121527 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-28 01:13:58.121624 | orchestrator | Saturday 28 February 2026 01:12:43 +0000 (0:00:00.391) 0:00:26.410 ***** 2026-02-28 01:13:58.121645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:13:58.121666 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:58.121693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:13:58.121705 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:58.121717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:13:58.121728 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:58.121739 | orchestrator | 2026-02-28 01:13:58.121750 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-02-28 01:13:58.121761 | orchestrator | Saturday 28 February 2026 01:12:44 +0000 (0:00:01.637) 0:00:28.047 ***** 2026-02-28 01:13:58.121772 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:13:58.121783 | orchestrator | 2026-02-28 01:13:58.121794 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] 
******** 2026-02-28 01:13:58.121805 | orchestrator | Saturday 28 February 2026 01:12:47 +0000 (0:00:02.853) 0:00:30.901 ***** 2026-02-28 01:13:58.121815 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:13:58.121826 | orchestrator | 2026-02-28 01:13:58.121837 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-28 01:13:58.121848 | orchestrator | Saturday 28 February 2026 01:12:50 +0000 (0:00:02.531) 0:00:33.433 ***** 2026-02-28 01:13:58.121859 | orchestrator | 2026-02-28 01:13:58.121870 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-28 01:13:58.121881 | orchestrator | Saturday 28 February 2026 01:12:50 +0000 (0:00:00.087) 0:00:33.520 ***** 2026-02-28 01:13:58.121892 | orchestrator | 2026-02-28 01:13:58.121903 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-28 01:13:58.121921 | orchestrator | Saturday 28 February 2026 01:12:50 +0000 (0:00:00.091) 0:00:33.611 ***** 2026-02-28 01:13:58.121933 | orchestrator | 2026-02-28 01:13:58.121951 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-02-28 01:13:58.121962 | orchestrator | Saturday 28 February 2026 01:12:50 +0000 (0:00:00.081) 0:00:33.693 ***** 2026-02-28 01:13:58.121973 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:58.122078 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:58.122090 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:13:58.122101 | orchestrator | 2026-02-28 01:13:58.122112 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-02-28 01:13:58.122124 | orchestrator | Saturday 28 February 2026 01:12:52 +0000 (0:00:02.033) 0:00:35.727 ***** 2026-02-28 01:13:58.122135 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:58.122146 | orchestrator | skipping: [testbed-node-2] 2026-02-28 
01:13:58.122161 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-02-28 01:13:58.122179 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-02-28 01:13:58.122206 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:13:58.122228 | orchestrator | 2026-02-28 01:13:58.122244 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-02-28 01:13:58.122261 | orchestrator | Saturday 28 February 2026 01:13:20 +0000 (0:00:27.544) 0:01:03.271 ***** 2026-02-28 01:13:58.122277 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:58.122294 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:13:58.122310 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:13:58.122327 | orchestrator | 2026-02-28 01:13:58.122350 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-02-28 01:13:58.122367 | orchestrator | Saturday 28 February 2026 01:13:49 +0000 (0:00:29.338) 0:01:32.610 ***** 2026-02-28 01:13:58.122387 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:13:58.122405 | orchestrator | 2026-02-28 01:13:58.122424 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-02-28 01:13:58.122441 | orchestrator | Saturday 28 February 2026 01:13:51 +0000 (0:00:02.474) 0:01:35.084 ***** 2026-02-28 01:13:58.122459 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:58.122478 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:58.122496 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:58.122515 | orchestrator | 2026-02-28 01:13:58.122670 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-02-28 01:13:58.122708 | orchestrator | Saturday 28 February 2026 01:13:52 +0000 (0:00:00.321) 0:01:35.406 ***** 2026-02-28 01:13:58.122721 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-02-28 01:13:58.122750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-02-28 01:13:58.122762 | orchestrator | 2026-02-28 01:13:58.122772 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-02-28 01:13:58.122781 | orchestrator | Saturday 28 February 2026 01:13:54 +0000 (0:00:02.747) 0:01:38.154 ***** 2026-02-28 01:13:58.122791 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:58.122801 | orchestrator | 2026-02-28 01:13:58.122811 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:13:58.122822 | orchestrator | testbed-node-0 : ok=22  changed=13  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-28 01:13:58.122834 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-28 01:13:58.122854 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-28 01:13:58.122864 | orchestrator | 2026-02-28 01:13:58.122874 | orchestrator | 2026-02-28 01:13:58.122884 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:13:58.122893 | orchestrator | Saturday 28 February 2026 01:13:55 +0000 (0:00:00.364) 0:01:38.519 ***** 2026-02-28 01:13:58.122903 | orchestrator | 
=============================================================================== 2026-02-28 01:13:58.122913 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 29.34s 2026-02-28 01:13:58.122922 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 27.54s 2026-02-28 01:13:58.122932 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 3.08s 2026-02-28 01:13:58.122941 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.85s 2026-02-28 01:13:58.122951 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 2.81s 2026-02-28 01:13:58.122961 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.75s 2026-02-28 01:13:58.122970 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.53s 2026-02-28 01:13:58.122980 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.47s 2026-02-28 01:13:58.123002 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 2.16s 2026-02-28 01:13:58.123012 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.03s 2026-02-28 01:13:58.123022 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.80s 2026-02-28 01:13:58.123032 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.76s 2026-02-28 01:13:58.123043 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.64s 2026-02-28 01:13:58.123055 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 1.63s 2026-02-28 01:13:58.123066 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.60s 2026-02-28 01:13:58.123077 | orchestrator | grafana : Check 
if extra configuration file exists ---------------------- 1.31s 2026-02-28 01:13:58.123088 | orchestrator | grafana : Copying over extra configuration file ------------------------- 1.30s 2026-02-28 01:13:58.123099 | orchestrator | service-check-containers : grafana | Check containers ------------------- 1.22s 2026-02-28 01:13:58.123110 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.20s 2026-02-28 01:13:58.123121 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.18s 2026-02-28 01:13:58.123133 | orchestrator | 2026-02-28 01:13:58 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state STARTED 2026-02-28 01:13:58.123144 | orchestrator | 2026-02-28 01:13:58 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:16:05.953315 | orchestrator | 2026-02-28 01:16:05 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:16:05.957023 | orchestrator | 2026-02-28 01:16:05 | INFO  | Task 4a59eb84-2d9d-4591-ab56-a47aae2bb50d is in state SUCCESS 2026-02-28 01:16:05.958801 | orchestrator | 2026-02-28 01:16:05.958852 | orchestrator | 2026-02-28 01:16:05.958866 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:16:05.958878 | orchestrator | 2026-02-28 01:16:05.958888 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-02-28 01:16:05.958900 | orchestrator | Saturday 28 February 2026 01:04:19 +0000 (0:00:00.418) 0:00:00.418 ***** 2026-02-28 01:16:05.958910 | orchestrator | changed: [testbed-manager] 2026-02-28 01:16:05.958922 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:16:05.958933 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:16:05.958945 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:16:05.958957 | orchestrator | changed: [testbed-node-3] 2026-02-28 01:16:05.958969 | orchestrator | changed: [testbed-node-4] 2026-02-28 01:16:05.958980 | orchestrator | changed: [testbed-node-5] 2026-02-28 01:16:05.958992 | orchestrator | 2026-02-28 01:16:05.959004 | orchestrator | TASK
[Group hosts based on Kolla action] *************************************** 2026-02-28 01:16:05.959016 | orchestrator | Saturday 28 February 2026 01:04:21 +0000 (0:00:01.906) 0:00:02.325 ***** 2026-02-28 01:16:05.959072 | orchestrator | changed: [testbed-manager] 2026-02-28 01:16:05.959085 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:16:05.959130 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:16:05.959143 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:16:05.959154 | orchestrator | changed: [testbed-node-3] 2026-02-28 01:16:05.959166 | orchestrator | changed: [testbed-node-4] 2026-02-28 01:16:05.959178 | orchestrator | changed: [testbed-node-5] 2026-02-28 01:16:05.959190 | orchestrator | 2026-02-28 01:16:05.959203 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:16:05.959215 | orchestrator | Saturday 28 February 2026 01:04:23 +0000 (0:00:01.630) 0:00:03.956 ***** 2026-02-28 01:16:05.959228 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-02-28 01:16:05.959241 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-02-28 01:16:05.959253 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-02-28 01:16:05.959265 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-02-28 01:16:05.959277 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-02-28 01:16:05.959289 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-02-28 01:16:05.959302 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-02-28 01:16:05.959340 | orchestrator | 2026-02-28 01:16:05.959352 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-02-28 01:16:05.959365 | orchestrator | 2026-02-28 01:16:05.959614 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-02-28 
01:16:05.959632 | orchestrator | Saturday 28 February 2026 01:04:25 +0000 (0:00:01.910) 0:00:05.866 ***** 2026-02-28 01:16:05.959644 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:16:05.959656 | orchestrator | 2026-02-28 01:16:05.959667 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-02-28 01:16:05.959678 | orchestrator | Saturday 28 February 2026 01:04:26 +0000 (0:00:00.969) 0:00:06.836 ***** 2026-02-28 01:16:05.959692 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-02-28 01:16:05.959705 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-02-28 01:16:05.959718 | orchestrator | 2026-02-28 01:16:05.959729 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-02-28 01:16:05.959740 | orchestrator | Saturday 28 February 2026 01:04:30 +0000 (0:00:04.458) 0:00:11.294 ***** 2026-02-28 01:16:05.959752 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-28 01:16:05.959763 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-28 01:16:05.959800 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:16:05.959812 | orchestrator | 2026-02-28 01:16:05.959823 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-28 01:16:05.959833 | orchestrator | Saturday 28 February 2026 01:04:35 +0000 (0:00:04.859) 0:00:16.154 ***** 2026-02-28 01:16:05.959843 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:16:05.959854 | orchestrator | 2026-02-28 01:16:05.959864 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-02-28 01:16:05.959874 | orchestrator | Saturday 28 February 2026 01:04:36 +0000 (0:00:00.702) 0:00:16.856 ***** 2026-02-28 01:16:05.959884 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:16:05.959893 | orchestrator | 2026-02-28 01:16:05.959903 | 
orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-02-28 01:16:05.959914 | orchestrator | Saturday 28 February 2026 01:04:38 +0000 (0:00:01.962) 0:00:18.818 ***** 2026-02-28 01:16:05.959925 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:16:05.959936 | orchestrator | 2026-02-28 01:16:05.959947 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-28 01:16:05.959960 | orchestrator | Saturday 28 February 2026 01:04:42 +0000 (0:00:04.545) 0:00:23.364 ***** 2026-02-28 01:16:05.959971 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.959982 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.959992 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.960002 | orchestrator | 2026-02-28 01:16:05.960013 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-02-28 01:16:05.960038 | orchestrator | Saturday 28 February 2026 01:04:43 +0000 (0:00:00.389) 0:00:23.754 ***** 2026-02-28 01:16:05.960050 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:16:05.960060 | orchestrator | 2026-02-28 01:16:05.960071 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-02-28 01:16:05.960081 | orchestrator | Saturday 28 February 2026 01:05:15 +0000 (0:00:32.045) 0:00:55.799 ***** 2026-02-28 01:16:05.960092 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:16:05.960101 | orchestrator | 2026-02-28 01:16:05.960150 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-28 01:16:05.960161 | orchestrator | Saturday 28 February 2026 01:05:31 +0000 (0:00:16.093) 0:01:11.893 ***** 2026-02-28 01:16:05.960171 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:16:05.960191 | orchestrator | 2026-02-28 01:16:05.960202 | orchestrator | TASK [nova-cell : Extract current cell settings from list] 
********************* 2026-02-28 01:16:05.960212 | orchestrator | Saturday 28 February 2026 01:05:43 +0000 (0:00:12.600) 0:01:24.493 ***** 2026-02-28 01:16:05.960248 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:16:05.960260 | orchestrator | 2026-02-28 01:16:05.960269 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-02-28 01:16:05.960279 | orchestrator | Saturday 28 February 2026 01:05:45 +0000 (0:00:01.560) 0:01:26.054 ***** 2026-02-28 01:16:05.960289 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.960299 | orchestrator | 2026-02-28 01:16:05.960309 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-28 01:16:05.960318 | orchestrator | Saturday 28 February 2026 01:05:46 +0000 (0:00:01.199) 0:01:27.253 ***** 2026-02-28 01:16:05.960328 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:16:05.960339 | orchestrator | 2026-02-28 01:16:05.960348 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-02-28 01:16:05.960358 | orchestrator | Saturday 28 February 2026 01:05:47 +0000 (0:00:00.568) 0:01:27.822 ***** 2026-02-28 01:16:05.960369 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:16:05.960380 | orchestrator | 2026-02-28 01:16:05.960390 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-02-28 01:16:05.960401 | orchestrator | Saturday 28 February 2026 01:06:06 +0000 (0:00:18.960) 0:01:46.782 ***** 2026-02-28 01:16:05.960411 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.960421 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.960466 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.960478 | orchestrator | 2026-02-28 01:16:05.960488 | orchestrator | PLAY [Bootstrap nova cell databases] 
******************************************* 2026-02-28 01:16:05.960498 | orchestrator | 2026-02-28 01:16:05.960508 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-02-28 01:16:05.960519 | orchestrator | Saturday 28 February 2026 01:06:06 +0000 (0:00:00.388) 0:01:47.170 ***** 2026-02-28 01:16:05.960529 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:16:05.960538 | orchestrator | 2026-02-28 01:16:05.960544 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-02-28 01:16:05.960551 | orchestrator | Saturday 28 February 2026 01:06:07 +0000 (0:00:00.622) 0:01:47.793 ***** 2026-02-28 01:16:05.960557 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.960564 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.960570 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:16:05.960647 | orchestrator | 2026-02-28 01:16:05.960655 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-02-28 01:16:05.960661 | orchestrator | Saturday 28 February 2026 01:06:09 +0000 (0:00:02.344) 0:01:50.138 ***** 2026-02-28 01:16:05.960667 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.960674 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.960680 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:16:05.960686 | orchestrator | 2026-02-28 01:16:05.960693 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-02-28 01:16:05.960699 | orchestrator | Saturday 28 February 2026 01:06:11 +0000 (0:00:02.363) 0:01:52.502 ***** 2026-02-28 01:16:05.960705 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.960711 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.960718 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.960724 | orchestrator | 2026-02-28 
01:16:05.960730 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-02-28 01:16:05.960737 | orchestrator | Saturday 28 February 2026 01:06:12 +0000 (0:00:00.432) 0:01:52.934 ***** 2026-02-28 01:16:05.960743 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-28 01:16:05.960749 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.960756 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-28 01:16:05.960762 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.960768 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-28 01:16:05.960774 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-02-28 01:16:05.960781 | orchestrator | 2026-02-28 01:16:05.960787 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-02-28 01:16:05.960793 | orchestrator | Saturday 28 February 2026 01:06:26 +0000 (0:00:14.007) 0:02:06.942 ***** 2026-02-28 01:16:05.960800 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.960806 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.960812 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.960818 | orchestrator | 2026-02-28 01:16:05.960824 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-02-28 01:16:05.960831 | orchestrator | Saturday 28 February 2026 01:06:27 +0000 (0:00:00.587) 0:02:07.530 ***** 2026-02-28 01:16:05.960837 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-28 01:16:05.960843 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.960849 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-28 01:16:05.960855 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.960862 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-28 01:16:05.960868 | orchestrator | skipping: [testbed-node-2] 
2026-02-28 01:16:05.960874 | orchestrator | 2026-02-28 01:16:05.960880 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-28 01:16:05.960893 | orchestrator | Saturday 28 February 2026 01:06:28 +0000 (0:00:01.795) 0:02:09.325 ***** 2026-02-28 01:16:05.960900 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.960912 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:16:05.960918 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.960925 | orchestrator | 2026-02-28 01:16:05.960931 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-02-28 01:16:05.960937 | orchestrator | Saturday 28 February 2026 01:06:29 +0000 (0:00:00.795) 0:02:10.120 ***** 2026-02-28 01:16:05.960943 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.960950 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.960956 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:16:05.960962 | orchestrator | 2026-02-28 01:16:05.960968 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-02-28 01:16:05.960975 | orchestrator | Saturday 28 February 2026 01:06:30 +0000 (0:00:01.158) 0:02:11.279 ***** 2026-02-28 01:16:05.960981 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.960987 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.961003 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:16:05.961009 | orchestrator | 2026-02-28 01:16:05.961016 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-02-28 01:16:05.961022 | orchestrator | Saturday 28 February 2026 01:06:33 +0000 (0:00:02.953) 0:02:14.232 ***** 2026-02-28 01:16:05.961028 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.961034 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.961041 | orchestrator | ok: [testbed-node-0] 2026-02-28 
01:16:05.961047 | orchestrator | 2026-02-28 01:16:05.961053 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-28 01:16:05.961059 | orchestrator | Saturday 28 February 2026 01:06:57 +0000 (0:00:23.693) 0:02:37.926 ***** 2026-02-28 01:16:05.961066 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.961072 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.961078 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:16:05.961084 | orchestrator | 2026-02-28 01:16:05.961091 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-28 01:16:05.961097 | orchestrator | Saturday 28 February 2026 01:07:11 +0000 (0:00:14.147) 0:02:52.073 ***** 2026-02-28 01:16:05.961103 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:16:05.961109 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.961116 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.961122 | orchestrator | 2026-02-28 01:16:05.961128 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-02-28 01:16:05.961134 | orchestrator | Saturday 28 February 2026 01:07:12 +0000 (0:00:00.992) 0:02:53.066 ***** 2026-02-28 01:16:05.961140 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.961146 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.961153 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:16:05.961159 | orchestrator | 2026-02-28 01:16:05.961165 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-02-28 01:16:05.961171 | orchestrator | Saturday 28 February 2026 01:07:25 +0000 (0:00:13.176) 0:03:06.242 ***** 2026-02-28 01:16:05.961177 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.961184 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.961190 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.961196 
| orchestrator | 2026-02-28 01:16:05.961202 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-02-28 01:16:05.961209 | orchestrator | Saturday 28 February 2026 01:07:28 +0000 (0:00:02.643) 0:03:08.886 ***** 2026-02-28 01:16:05.961215 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.961221 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.961227 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.961233 | orchestrator | 2026-02-28 01:16:05.961240 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-02-28 01:16:05.961246 | orchestrator | 2026-02-28 01:16:05.961252 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-28 01:16:05.961258 | orchestrator | Saturday 28 February 2026 01:07:29 +0000 (0:00:00.737) 0:03:09.623 ***** 2026-02-28 01:16:05.961269 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:16:05.961277 | orchestrator | 2026-02-28 01:16:05.961283 | orchestrator | TASK [service-ks-register : nova | Creating/deleting services] ***************** 2026-02-28 01:16:05.961289 | orchestrator | Saturday 28 February 2026 01:07:29 +0000 (0:00:00.836) 0:03:10.460 ***** 2026-02-28 01:16:05.961296 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-02-28 01:16:05.961302 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-02-28 01:16:05.961308 | orchestrator | 2026-02-28 01:16:05.961314 | orchestrator | TASK [service-ks-register : nova | Creating/deleting endpoints] **************** 2026-02-28 01:16:05.961321 | orchestrator | Saturday 28 February 2026 01:07:33 +0000 (0:00:03.102) 0:03:13.563 ***** 2026-02-28 01:16:05.961327 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s 
-> internal)  2026-02-28 01:16:05.961336 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-02-28 01:16:05.961342 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-02-28 01:16:05.961349 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-02-28 01:16:05.961355 | orchestrator | 2026-02-28 01:16:05.961361 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-02-28 01:16:05.961368 | orchestrator | Saturday 28 February 2026 01:07:40 +0000 (0:00:07.201) 0:03:20.764 ***** 2026-02-28 01:16:05.961374 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-28 01:16:05.961380 | orchestrator | 2026-02-28 01:16:05.961387 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-02-28 01:16:05.961393 | orchestrator | Saturday 28 February 2026 01:07:43 +0000 (0:00:03.555) 0:03:24.320 ***** 2026-02-28 01:16:05.961399 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-02-28 01:16:05.961409 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-28 01:16:05.961415 | orchestrator | 2026-02-28 01:16:05.961422 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-02-28 01:16:05.961428 | orchestrator | Saturday 28 February 2026 01:07:48 +0000 (0:00:04.265) 0:03:28.585 ***** 2026-02-28 01:16:05.961434 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-28 01:16:05.961462 | orchestrator | 2026-02-28 01:16:05.961472 | orchestrator | TASK [service-ks-register : nova | Granting/revoking user roles] *************** 2026-02-28 01:16:05.961483 | orchestrator | Saturday 28 February 2026 01:07:51 +0000 (0:00:03.228) 0:03:31.813 ***** 2026-02-28 01:16:05.961493 | 
orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-02-28 01:16:05.961503 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-02-28 01:16:05.961513 | orchestrator | 2026-02-28 01:16:05.961523 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-28 01:16:05.961534 | orchestrator | Saturday 28 February 2026 01:07:59 +0000 (0:00:08.034) 0:03:39.847 ***** 2026-02-28 01:16:05.961546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.961563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.961571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.961589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.961597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.961615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 
'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.961623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.961632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.961642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.961648 | orchestrator | 2026-02-28 01:16:05.961659 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-02-28 01:16:05.961670 | orchestrator | Saturday 28 February 2026 01:08:02 +0000 (0:00:02.918) 0:03:42.770 ***** 2026-02-28 01:16:05.961679 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.961689 | orchestrator | 2026-02-28 01:16:05.961699 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-02-28 01:16:05.961709 | orchestrator | Saturday 28 February 2026 01:08:02 +0000 (0:00:00.310) 0:03:43.080 ***** 2026-02-28 01:16:05.961719 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.961730 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.961747 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.961758 | orchestrator | 2026-02-28 01:16:05.961768 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-02-28 01:16:05.961777 | orchestrator | Saturday 28 February 2026 01:08:03 +0000 (0:00:01.338) 0:03:44.419 ***** 
2026-02-28 01:16:05.961788 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 01:16:05.961797 | orchestrator | 2026-02-28 01:16:05.961807 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-02-28 01:16:05.961817 | orchestrator | Saturday 28 February 2026 01:08:05 +0000 (0:00:01.267) 0:03:45.687 ***** 2026-02-28 01:16:05.961827 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.961837 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.961848 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.961858 | orchestrator | 2026-02-28 01:16:05.961869 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-28 01:16:05.961880 | orchestrator | Saturday 28 February 2026 01:08:05 +0000 (0:00:00.361) 0:03:46.048 ***** 2026-02-28 01:16:05.961890 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:16:05.961900 | orchestrator | 2026-02-28 01:16:05.961915 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-28 01:16:05.961928 | orchestrator | Saturday 28 February 2026 01:08:06 +0000 (0:00:00.599) 0:03:46.648 ***** 2026-02-28 01:16:05.961939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.961957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.961978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.962001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.962069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.962095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.962109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.962122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.962129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.962136 | orchestrator | 2026-02-28 01:16:05.962142 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-28 01:16:05.962149 | orchestrator | Saturday 28 February 2026 01:08:11 +0000 (0:00:05.069) 0:03:51.717 ***** 2026-02-28 
01:16:05.962156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:16:05.962163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:16:05.962174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.962192 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:16:05.962207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:16:05.962999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:16:05.963019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:16:05.963067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:16:05.963107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.963121 | orchestrator | skipping: [testbed-node-1] 
2026-02-28 01:16:05.963133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.963145 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.963157 | orchestrator | 2026-02-28 01:16:05.963170 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-28 01:16:05.963183 | orchestrator | Saturday 28 February 2026 01:08:13 +0000 (0:00:02.427) 0:03:54.144 ***** 2026-02-28 01:16:05.963196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:16:05.963214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:16:05.963244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.963256 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.963269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:16:05.963282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}}}})  2026-02-28 01:16:05.963295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.963315 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.963332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:16:05.963355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 
'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:16:05.963369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.963380 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.963392 | orchestrator | 2026-02-28 01:16:05.963404 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-02-28 01:16:05.963415 | orchestrator | Saturday 28 February 2026 01:08:15 +0000 (0:00:01.523) 0:03:55.668 ***** 2026-02-28 01:16:05.963426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.963523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.963554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.963569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.963584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.963612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.963639 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.963653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.963667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.963680 | orchestrator | 2026-02-28 01:16:05.963694 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-02-28 01:16:05.963708 | orchestrator | Saturday 28 February 2026 01:08:21 +0000 (0:00:06.529) 0:04:02.197 ***** 2026-02-28 01:16:05.963724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.963757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.963783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.963799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': 
{'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.963813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 
2026-02-28 01:16:05.963846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.963870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.963884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.963901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.963914 | orchestrator | 2026-02-28 01:16:05.963929 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-02-28 01:16:05.963941 | orchestrator | Saturday 28 February 2026 01:08:37 +0000 (0:00:15.999) 0:04:18.197 ***** 2026-02-28 01:16:05.963953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:16:05.963989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:16:05.964014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:16:05.964029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.964042 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.964057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:16:05.964081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.964094 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.964137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:16:05.964156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:16:05.964171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.964193 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.964206 | orchestrator | 2026-02-28 01:16:05.964219 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] 
**********************************
2026-02-28 01:16:05.964232 | orchestrator | Saturday 28 February 2026 01:08:38 +0000 (0:00:01.303) 0:04:19.500 *****
2026-02-28 01:16:05.964246 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:16:05.964260 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:16:05.964272 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:16:05.964284 | orchestrator |
2026-02-28 01:16:05.964297 | orchestrator | TASK [nova : Copying over nova-metadata-wsgi.conf] *****************************
2026-02-28 01:16:05.964308 | orchestrator | Saturday 28 February 2026 01:08:41 +0000 (0:00:02.217) 0:04:21.717 *****
2026-02-28 01:16:05.964321 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:16:05.964333 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:16:05.964347 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:16:05.964358 | orchestrator |
2026-02-28 01:16:05.964370 | orchestrator | TASK [nova : Copying over vendordata file for nova services] *******************
2026-02-28 01:16:05.964382 | orchestrator | Saturday 28 February 2026 01:08:43 +0000 (0:00:02.418) 0:04:24.136 *****
2026-02-28 01:16:05.964395 | orchestrator | skipping: [testbed-node-0] => (item=nova-metadata)
2026-02-28 01:16:05.964410 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-02-28 01:16:05.964423 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:16:05.964435 | orchestrator | skipping: [testbed-node-1] => (item=nova-metadata)
2026-02-28 01:16:05.964471 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-02-28 01:16:05.964483 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:16:05.964496 | orchestrator | skipping: [testbed-node-2] => (item=nova-metadata)
2026-02-28 01:16:05.964508 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-02-28 01:16:05.964521 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:16:05.964532 | orchestrator |
2026-02-28 01:16:05.964545 | orchestrator | TASK [Configure uWSGI for Nova] ************************************************
2026-02-28 01:16:05.964558 | orchestrator | Saturday 28 February 2026 01:08:45 +0000 (0:00:01.744) 0:04:25.880 *****
2026-02-28 01:16:05.964570 | orchestrator | included: service-uwsgi-config for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'name': 'nova-api', 'port': '8774', 'workers': '2'})
2026-02-28 01:16:05.964593 | orchestrator | included: service-uwsgi-config for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'name': 'nova-metadata', 'port': '8775', 'workers': '2'})
2026-02-28 01:16:05.964606 | orchestrator |
2026-02-28 01:16:05.964618 | orchestrator | TASK [service-uwsgi-config : Copying over nova-api uWSGI config] ***************
2026-02-28 01:16:05.964629 | orchestrator | Saturday 28 February 2026 01:08:48 +0000 (0:00:02.762) 0:04:28.643 *****
2026-02-28 01:16:05.964639 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:16:05.964650 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:16:05.964661 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:16:05.964673 | orchestrator |
2026-02-28 01:16:05.964684 | orchestrator | TASK [service-uwsgi-config : Copying over nova-metadata uWSGI config] **********
2026-02-28 01:16:05.964696 | orchestrator | Saturday 28 February 2026 01:08:51 +0000 (0:00:03.644) 0:04:32.287 *****
2026-02-28 01:16:05.964709 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:16:05.964722 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:16:05.964734 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:16:05.964746 | orchestrator |
2026-02-28 01:16:05.964758 | orchestrator | TASK [service-check-containers : nova | Check containers] **********************
2026-02-28 01:16:05.964770 | orchestrator | Saturday 28 February 2026 01:08:55 +0000 (0:00:04.125) 0:04:36.413 *****
2026-02-28 01:16:05.964796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.964826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.964840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.964867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.964881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.964903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.964916 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.964929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-28 01:16:05.964950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-28 01:16:05.964964 | orchestrator |
2026-02-28 01:16:05.964976 | orchestrator | TASK [service-check-containers : nova | Notify handlers to restart containers] ***
2026-02-28 01:16:05.964996 | orchestrator | Saturday 28 February 2026 01:08:59 +0000 (0:00:03.922) 0:04:40.335 *****
2026-02-28 01:16:05.965015 | orchestrator | changed: [testbed-node-0] => {
2026-02-28 01:16:05.965029 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 01:16:05.965041 | orchestrator | }
2026-02-28 01:16:05.965054 | orchestrator | changed: [testbed-node-1] => {
2026-02-28 01:16:05.965067 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 01:16:05.965078 | orchestrator | }
2026-02-28 01:16:05.965090 | orchestrator | changed: [testbed-node-2] => {
2026-02-28 01:16:05.965103 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 01:16:05.965115 | orchestrator | }
2026-02-28 01:16:05.965126 | orchestrator |
2026-02-28 01:16:05.965139 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-28 01:16:05.965150 | orchestrator | Saturday 28 February 2026 01:09:01 +0000 (0:00:01.520) 0:04:41.856 *****
2026-02-28 01:16:05.965164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:16:05.965178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:16:05.965190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.965204 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.965230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:16:05.965252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:16:05.965266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.965278 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.965290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:16:05.965308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-28 01:16:05.965336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2026-02-28 01:16:05.965348 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.965360 | orchestrator | 2026-02-28 01:16:05.965370 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-28 01:16:05.965381 | orchestrator | Saturday 28 February 2026 01:09:03 +0000 (0:00:02.276) 0:04:44.132 ***** 2026-02-28 01:16:05.965393 | orchestrator | 2026-02-28 01:16:05.965405 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-28 01:16:05.965416 | orchestrator | Saturday 28 February 2026 01:09:03 +0000 (0:00:00.150) 0:04:44.283 ***** 2026-02-28 01:16:05.965428 | orchestrator | 2026-02-28 01:16:05.965468 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-28 01:16:05.965481 | orchestrator | Saturday 28 February 2026 01:09:03 +0000 (0:00:00.160) 0:04:44.444 ***** 2026-02-28 01:16:05.965494 | orchestrator | 2026-02-28 01:16:05.965505 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-02-28 01:16:05.965518 | orchestrator | Saturday 28 February 2026 01:09:04 +0000 (0:00:00.582) 0:04:45.026 ***** 2026-02-28 01:16:05.965530 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:16:05.965542 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:16:05.965555 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:16:05.965568 | orchestrator | 2026-02-28 01:16:05.965580 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-02-28 01:16:05.965592 | orchestrator | Saturday 28 February 2026 01:09:33 +0000 (0:00:28.637) 0:05:13.664 ***** 2026-02-28 01:16:05.965604 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:16:05.965617 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:16:05.965629 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:16:05.965642 | orchestrator | 2026-02-28 01:16:05.965654 | 
orchestrator | RUNNING HANDLER [nova : Restart nova-metadata container] *********************** 2026-02-28 01:16:05.965666 | orchestrator | Saturday 28 February 2026 01:09:45 +0000 (0:00:12.265) 0:05:25.929 ***** 2026-02-28 01:16:05.965678 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:16:05.965690 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:16:05.965702 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:16:05.965714 | orchestrator | 2026-02-28 01:16:05.965726 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-02-28 01:16:05.965739 | orchestrator | 2026-02-28 01:16:05.965751 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-28 01:16:05.965764 | orchestrator | Saturday 28 February 2026 01:09:55 +0000 (0:00:10.182) 0:05:36.112 ***** 2026-02-28 01:16:05.965776 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:16:05.965789 | orchestrator | 2026-02-28 01:16:05.965802 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-28 01:16:05.965814 | orchestrator | Saturday 28 February 2026 01:09:57 +0000 (0:00:01.486) 0:05:37.599 ***** 2026-02-28 01:16:05.965842 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:16:05.965855 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:16:05.965867 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:16:05.965879 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.965891 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.965902 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.965914 | orchestrator | 2026-02-28 01:16:05.965927 | orchestrator | TASK [nova-cell : Get new Libvirt version] ************************************* 2026-02-28 01:16:05.965939 | orchestrator | 
Saturday 28 February 2026 01:09:57 +0000 (0:00:00.729) 0:05:38.329 ***** 2026-02-28 01:16:05.965951 | orchestrator | changed: [testbed-node-3] 2026-02-28 01:16:05.965963 | orchestrator | 2026-02-28 01:16:05.965975 | orchestrator | TASK [nova-cell : Cache new Libvirt version] *********************************** 2026-02-28 01:16:05.965987 | orchestrator | Saturday 28 February 2026 01:10:25 +0000 (0:00:27.877) 0:06:06.207 ***** 2026-02-28 01:16:05.965999 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:16:05.966011 | orchestrator | 2026-02-28 01:16:05.966063 | orchestrator | TASK [Get nova_libvirt image info] ********************************************* 2026-02-28 01:16:05.966075 | orchestrator | Saturday 28 February 2026 01:10:27 +0000 (0:00:01.551) 0:06:07.758 ***** 2026-02-28 01:16:05.966087 | orchestrator | included: service-image-info for testbed-node-3 2026-02-28 01:16:05.966099 | orchestrator | 2026-02-28 01:16:05.966106 | orchestrator | TASK [service-image-info : community.docker.docker_image_info] ***************** 2026-02-28 01:16:05.966113 | orchestrator | Saturday 28 February 2026 01:10:28 +0000 (0:00:00.839) 0:06:08.597 ***** 2026-02-28 01:16:05.966126 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:16:05.966134 | orchestrator | 2026-02-28 01:16:05.966142 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-02-28 01:16:05.966148 | orchestrator | Saturday 28 February 2026 01:10:32 +0000 (0:00:04.521) 0:06:13.119 ***** 2026-02-28 01:16:05.966155 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:16:05.966162 | orchestrator | 2026-02-28 01:16:05.966169 | orchestrator | TASK [service-image-info : containers.podman.podman_image_info] **************** 2026-02-28 01:16:05.966176 | orchestrator | Saturday 28 February 2026 01:10:36 +0000 (0:00:03.826) 0:06:16.946 ***** 2026-02-28 01:16:05.966183 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:16:05.966191 | orchestrator | 2026-02-28 
01:16:05.966197 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-02-28 01:16:05.966204 | orchestrator | Saturday 28 February 2026 01:10:38 +0000 (0:00:02.000) 0:06:18.947 ***** 2026-02-28 01:16:05.966211 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:16:05.966218 | orchestrator | 2026-02-28 01:16:05.966225 | orchestrator | TASK [nova-cell : Get container facts] ***************************************** 2026-02-28 01:16:05.966242 | orchestrator | Saturday 28 February 2026 01:10:40 +0000 (0:00:02.227) 0:06:21.174 ***** 2026-02-28 01:16:05.966249 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-28 01:16:05.966257 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-28 01:16:05.966264 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-28 01:16:05.966272 | orchestrator | 2026-02-28 01:16:05.966278 | orchestrator | TASK [nova-cell : Get current Libvirt version] ********************************* 2026-02-28 01:16:05.966285 | orchestrator | Saturday 28 February 2026 01:10:52 +0000 (0:00:11.767) 0:06:32.941 ***** 2026-02-28 01:16:05.966292 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 01:16:05.966299 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 01:16:05.966306 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 01:16:05.966314 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:16:05.966321 | orchestrator | 2026-02-28 01:16:05.966328 | orchestrator | TASK [nova-cell : Check that the new Libvirt version is >= current] ************ 2026-02-28 01:16:05.966335 | orchestrator | Saturday 28 February 2026 01:10:58 +0000 (0:00:06.533) 0:06:39.475 ***** 2026-02-28 01:16:05.966351 | orchestrator | skipping: [testbed-node-3] => (item={'result': False, 'changed': False, 'containers': {}, 
'invocation': {'module_args': {'action': 'get_containers', 'container_engine': 'docker', 'name': ['nova_libvirt'], 'api_version': 'auto'}}, 'failed': False, 'item': 'testbed-node-3', 'ansible_loop_var': 'item'})  2026-02-28 01:16:05.966360 | orchestrator | skipping: [testbed-node-3] => (item={'result': False, 'changed': False, 'containers': {}, 'invocation': {'module_args': {'action': 'get_containers', 'container_engine': 'docker', 'name': ['nova_libvirt'], 'api_version': 'auto'}}, 'failed': False, 'item': 'testbed-node-4', 'ansible_loop_var': 'item'})  2026-02-28 01:16:05.966367 | orchestrator | skipping: [testbed-node-3] => (item={'result': False, 'changed': False, 'containers': {}, 'invocation': {'module_args': {'action': 'get_containers', 'container_engine': 'docker', 'name': ['nova_libvirt'], 'api_version': 'auto'}}, 'failed': False, 'item': 'testbed-node-5', 'ansible_loop_var': 'item'})  2026-02-28 01:16:05.966375 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:16:05.966383 | orchestrator | 2026-02-28 01:16:05.966394 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-02-28 01:16:05.966407 | orchestrator | Saturday 28 February 2026 01:11:03 +0000 (0:00:04.124) 0:06:43.600 ***** 2026-02-28 01:16:05.966425 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.966460 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.966471 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.966482 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 01:16:05.966492 | orchestrator | 2026-02-28 01:16:05.966502 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-28 01:16:05.966513 | orchestrator | Saturday 28 February 2026 01:11:04 +0000 (0:00:01.108) 0:06:44.708 ***** 2026-02-28 01:16:05.966524 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-02-28 01:16:05.966534 | 
orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-02-28 01:16:05.966544 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-02-28 01:16:05.966553 | orchestrator | 2026-02-28 01:16:05.966562 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-28 01:16:05.966571 | orchestrator | Saturday 28 February 2026 01:11:05 +0000 (0:00:01.101) 0:06:45.810 ***** 2026-02-28 01:16:05.966582 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-02-28 01:16:05.966593 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-02-28 01:16:05.966605 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-02-28 01:16:05.966616 | orchestrator | 2026-02-28 01:16:05.966627 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-28 01:16:05.966637 | orchestrator | Saturday 28 February 2026 01:11:06 +0000 (0:00:01.434) 0:06:47.245 ***** 2026-02-28 01:16:05.966648 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-02-28 01:16:05.966654 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:16:05.966661 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-02-28 01:16:05.966667 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:16:05.966673 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-02-28 01:16:05.966679 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:16:05.966686 | orchestrator | 2026-02-28 01:16:05.966698 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-02-28 01:16:05.966705 | orchestrator | Saturday 28 February 2026 01:11:07 +0000 (0:00:01.136) 0:06:48.382 ***** 2026-02-28 01:16:05.966711 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-28 01:16:05.966718 | orchestrator | skipping: [testbed-node-0] => 
(item=net.bridge.bridge-nf-call-ip6tables)  2026-02-28 01:16:05.966724 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.966730 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-28 01:16:05.966737 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-28 01:16:05.966751 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.966757 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-28 01:16:05.966764 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-28 01:16:05.966777 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-28 01:16:05.966784 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-28 01:16:05.966790 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.966796 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-28 01:16:05.966803 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-28 01:16:05.966809 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-28 01:16:05.966816 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-28 01:16:05.966822 | orchestrator | 2026-02-28 01:16:05.966828 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-02-28 01:16:05.966835 | orchestrator | Saturday 28 February 2026 01:11:10 +0000 (0:00:02.588) 0:06:50.970 ***** 2026-02-28 01:16:05.966841 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.966847 | orchestrator | changed: [testbed-node-3] 2026-02-28 01:16:05.966853 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.966860 | orchestrator | changed: [testbed-node-4] 2026-02-28 
01:16:05.966866 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.966872 | orchestrator | changed: [testbed-node-5] 2026-02-28 01:16:05.966879 | orchestrator | 2026-02-28 01:16:05.966885 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-02-28 01:16:05.966892 | orchestrator | Saturday 28 February 2026 01:11:12 +0000 (0:00:01.681) 0:06:52.652 ***** 2026-02-28 01:16:05.966898 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.966909 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.966919 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.966930 | orchestrator | changed: [testbed-node-4] 2026-02-28 01:16:05.966940 | orchestrator | changed: [testbed-node-5] 2026-02-28 01:16:05.966951 | orchestrator | changed: [testbed-node-3] 2026-02-28 01:16:05.966962 | orchestrator | 2026-02-28 01:16:05.966972 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-28 01:16:05.966984 | orchestrator | Saturday 28 February 2026 01:11:15 +0000 (0:00:02.870) 0:06:55.522 ***** 2026-02-28 01:16:05.966996 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967033 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967053 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967061 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}}) 2026-02-28 01:16:05.967075 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967091 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967111 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967118 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967131 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967147 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 
01:16:05.967154 | orchestrator | 2026-02-28 01:16:05.967160 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-28 01:16:05.967167 | orchestrator | Saturday 28 February 2026 01:11:19 +0000 (0:00:04.311) 0:06:59.833 ***** 2026-02-28 01:16:05.967174 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:16:05.967182 | orchestrator | 2026-02-28 01:16:05.967188 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-28 01:16:05.967195 | orchestrator | Saturday 28 February 2026 01:11:20 +0000 (0:00:01.553) 0:07:01.386 ***** 2026-02-28 01:16:05.967206 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967213 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967220 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967255 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967269 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967276 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967317 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967346 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967358 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.967381 | orchestrator | 2026-02-28 01:16:05.967392 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-28 01:16:05.967403 | orchestrator | Saturday 28 February 2026 01:11:24 +0000 (0:00:04.108) 0:07:05.495 ***** 2026-02-28 01:16:05.967414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:16:05.967432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:16:05.967466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.967478 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:16:05.967490 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:16:05.967501 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:16:05.967524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:16:05.967536 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.967546 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:16:05.967560 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:16:05.967579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.967590 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:16:05.967602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-28 01:16:05.967622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-28 01:16:05.967634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-28 01:16:05.967686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.967700 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.967716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.967729 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.967748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.967759 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.967769 | orchestrator | 2026-02-28 01:16:05.967781 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-28 01:16:05.967792 | orchestrator | Saturday 28 February 2026 01:11:28 +0000 (0:00:03.014) 0:07:08.510 ***** 2026-02-28 01:16:05.967803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:16:05.967822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:16:05.967834 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:16:05.967850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.967863 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:16:05.968381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:16:05.968413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.968462 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:16:05.968476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:16:05.968489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-28 01:16:05.968500 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:16:05.968518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.968530 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.968570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.968582 | 
orchestrator | skipping: [testbed-node-5] 2026-02-28 01:16:05.968593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-28 01:16:05.968611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.968622 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.968633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-28 01:16:05.968645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.968655 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.968665 | orchestrator | 2026-02-28 01:16:05.968677 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-28 01:16:05.968684 | orchestrator | Saturday 28 February 2026 01:11:31 +0000 (0:00:03.411) 0:07:11.921 ***** 2026-02-28 01:16:05.968691 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.968697 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.968703 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.968714 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 01:16:05.968721 | orchestrator | 2026-02-28 01:16:05.968727 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-02-28 01:16:05.968734 | orchestrator | Saturday 28 February 2026 01:11:32 +0000 (0:00:00.944) 0:07:12.865 ***** 2026-02-28 01:16:05.968740 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-28 01:16:05.968746 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-28 01:16:05.968753 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-28 01:16:05.968759 | orchestrator | 2026-02-28 
01:16:05.968765 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-02-28 01:16:05.968771 | orchestrator | Saturday 28 February 2026 01:11:33 +0000 (0:00:01.471) 0:07:14.337 ***** 2026-02-28 01:16:05.968778 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-28 01:16:05.968784 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-28 01:16:05.968790 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-28 01:16:05.968801 | orchestrator | 2026-02-28 01:16:05.968808 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-02-28 01:16:05.968833 | orchestrator | Saturday 28 February 2026 01:11:35 +0000 (0:00:01.255) 0:07:15.592 ***** 2026-02-28 01:16:05.968840 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:16:05.968847 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:16:05.968853 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:16:05.968860 | orchestrator | 2026-02-28 01:16:05.968866 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-02-28 01:16:05.968872 | orchestrator | Saturday 28 February 2026 01:11:35 +0000 (0:00:00.716) 0:07:16.309 ***** 2026-02-28 01:16:05.968879 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:16:05.968885 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:16:05.968891 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:16:05.968898 | orchestrator | 2026-02-28 01:16:05.968904 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-02-28 01:16:05.968910 | orchestrator | Saturday 28 February 2026 01:11:36 +0000 (0:00:00.545) 0:07:16.855 ***** 2026-02-28 01:16:05.968917 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-28 01:16:05.968923 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-28 01:16:05.968930 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 
2026-02-28 01:16:05.968936 | orchestrator | 2026-02-28 01:16:05.968945 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-02-28 01:16:05.968952 | orchestrator | Saturday 28 February 2026 01:11:37 +0000 (0:00:01.431) 0:07:18.286 ***** 2026-02-28 01:16:05.968959 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-28 01:16:05.968966 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-28 01:16:05.968974 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-28 01:16:05.968982 | orchestrator | 2026-02-28 01:16:05.968989 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-02-28 01:16:05.968996 | orchestrator | Saturday 28 February 2026 01:11:38 +0000 (0:00:01.159) 0:07:19.446 ***** 2026-02-28 01:16:05.969003 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-28 01:16:05.969011 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-28 01:16:05.969021 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-28 01:16:05.969032 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-02-28 01:16:05.969044 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-02-28 01:16:05.969055 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-02-28 01:16:05.969066 | orchestrator | 2026-02-28 01:16:05.969077 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-02-28 01:16:05.969088 | orchestrator | Saturday 28 February 2026 01:11:43 +0000 (0:00:04.091) 0:07:23.538 ***** 2026-02-28 01:16:05.969099 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:16:05.969111 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:16:05.969123 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:16:05.969134 | orchestrator | 2026-02-28 01:16:05.969146 | orchestrator | 
TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-02-28 01:16:05.969154 | orchestrator | Saturday 28 February 2026 01:11:43 +0000 (0:00:00.322) 0:07:23.861 ***** 2026-02-28 01:16:05.969160 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:16:05.969166 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:16:05.969172 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:16:05.969178 | orchestrator | 2026-02-28 01:16:05.969185 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-02-28 01:16:05.969191 | orchestrator | Saturday 28 February 2026 01:11:43 +0000 (0:00:00.542) 0:07:24.404 ***** 2026-02-28 01:16:05.969197 | orchestrator | changed: [testbed-node-3] 2026-02-28 01:16:05.969203 | orchestrator | changed: [testbed-node-4] 2026-02-28 01:16:05.969209 | orchestrator | changed: [testbed-node-5] 2026-02-28 01:16:05.969222 | orchestrator | 2026-02-28 01:16:05.969229 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-02-28 01:16:05.969235 | orchestrator | Saturday 28 February 2026 01:11:45 +0000 (0:00:01.351) 0:07:25.755 ***** 2026-02-28 01:16:05.969242 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-02-28 01:16:05.969250 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-02-28 01:16:05.969256 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-02-28 01:16:05.969268 | orchestrator | changed: [testbed-node-3] => (item={'uuid': 
'63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-02-28 01:16:05.969275 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-02-28 01:16:05.969281 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-02-28 01:16:05.969287 | orchestrator | 2026-02-28 01:16:05.969294 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-02-28 01:16:05.969300 | orchestrator | Saturday 28 February 2026 01:11:48 +0000 (0:00:03.447) 0:07:29.203 ***** 2026-02-28 01:16:05.969307 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-28 01:16:05.969333 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-28 01:16:05.969341 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-28 01:16:05.969347 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-28 01:16:05.969354 | orchestrator | changed: [testbed-node-4] 2026-02-28 01:16:05.969360 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-28 01:16:05.969366 | orchestrator | changed: [testbed-node-3] 2026-02-28 01:16:05.969372 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-28 01:16:05.969378 | orchestrator | changed: [testbed-node-5] 2026-02-28 01:16:05.969384 | orchestrator | 2026-02-28 01:16:05.969391 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-02-28 01:16:05.969397 | orchestrator | Saturday 28 February 2026 01:11:52 +0000 (0:00:03.427) 0:07:32.631 ***** 2026-02-28 01:16:05.969404 | orchestrator | skipping: 
[testbed-node-0] 2026-02-28 01:16:05.969410 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.969416 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.969422 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-4, testbed-node-3, testbed-node-5 2026-02-28 01:16:05.969429 | orchestrator | 2026-02-28 01:16:05.969435 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-02-28 01:16:05.969465 | orchestrator | Saturday 28 February 2026 01:11:54 +0000 (0:00:02.213) 0:07:34.845 ***** 2026-02-28 01:16:05.969476 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-28 01:16:05.969483 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-28 01:16:05.969490 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-28 01:16:05.969496 | orchestrator | 2026-02-28 01:16:05.969502 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-02-28 01:16:05.969509 | orchestrator | Saturday 28 February 2026 01:11:55 +0000 (0:00:01.337) 0:07:36.183 ***** 2026-02-28 01:16:05.969515 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:16:05.969521 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:16:05.969527 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:16:05.969543 | orchestrator | 2026-02-28 01:16:05.969550 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-02-28 01:16:05.969556 | orchestrator | Saturday 28 February 2026 01:11:56 +0000 (0:00:00.526) 0:07:36.709 ***** 2026-02-28 01:16:05.969562 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:16:05.969568 | orchestrator | 2026-02-28 01:16:05.969575 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-02-28 01:16:05.969581 | orchestrator | Saturday 28 February 2026 01:11:56 +0000 (0:00:00.157) 0:07:36.866 ***** 2026-02-28 
01:16:05.969587 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:16:05.969594 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:16:05.969600 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:16:05.969606 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.969612 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.969618 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.969625 | orchestrator | 2026-02-28 01:16:05.969631 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-02-28 01:16:05.969637 | orchestrator | Saturday 28 February 2026 01:11:57 +0000 (0:00:00.675) 0:07:37.542 ***** 2026-02-28 01:16:05.969643 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-28 01:16:05.969650 | orchestrator | 2026-02-28 01:16:05.969656 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-02-28 01:16:05.969662 | orchestrator | Saturday 28 February 2026 01:11:57 +0000 (0:00:00.841) 0:07:38.383 ***** 2026-02-28 01:16:05.969668 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:16:05.969675 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:16:05.969681 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:16:05.969687 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.969693 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.969699 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.969706 | orchestrator | 2026-02-28 01:16:05.969712 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-02-28 01:16:05.969718 | orchestrator | Saturday 28 February 2026 01:11:58 +0000 (0:00:00.908) 0:07:39.291 ***** 2026-02-28 01:16:05.969730 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-28 01:16:05.969756 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-28 01:16:05.969764 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-28 01:16:05.969776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:16:05.969784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:16:05.969790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:16:05.969801 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 01:16:05.969823 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 01:16:05.969830 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 01:16:05.969842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.969849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.969856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.969863 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.969873 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.969896 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.969908 | orchestrator | 2026-02-28 01:16:05.969915 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-02-28 01:16:05.969921 | orchestrator | Saturday 28 February 2026 01:12:03 +0000 (0:00:05.067) 0:07:44.359 ***** 2026-02-28 01:16:05.969928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:16:05.969934 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:16:05.969941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:16:05.969951 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:16:05.969961 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:16:05.969973 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:16:05.969979 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.969986 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.969993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:16:05.970003 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.970075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:16:05.970085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:16:05.970094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.970105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.970117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.970128 | orchestrator | 2026-02-28 01:16:05.970139 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-02-28 01:16:05.970150 | orchestrator | Saturday 28 February 2026 01:12:14 
+0000 (0:00:10.598) 0:07:54.958 ***** 2026-02-28 01:16:05.970161 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:16:05.970173 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:16:05.970184 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:16:05.970195 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.970206 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.970217 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.970231 | orchestrator | 2026-02-28 01:16:05.970243 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-02-28 01:16:05.970249 | orchestrator | Saturday 28 February 2026 01:12:16 +0000 (0:00:02.443) 0:07:57.401 ***** 2026-02-28 01:16:05.970256 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-28 01:16:05.970262 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-28 01:16:05.970268 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-28 01:16:05.970274 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-28 01:16:05.970281 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-28 01:16:05.970287 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-28 01:16:05.970302 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-28 01:16:05.970315 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-28 01:16:05.970326 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.970336 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.970346 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-28 01:16:05.970356 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.970367 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-28 01:16:05.970378 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-28 01:16:05.970389 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-28 01:16:05.970399 | orchestrator | 2026-02-28 01:16:05.970410 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-02-28 01:16:05.970421 | orchestrator | Saturday 28 February 2026 01:12:22 +0000 (0:00:05.176) 0:08:02.577 ***** 2026-02-28 01:16:05.970432 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:16:05.970468 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:16:05.970479 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:16:05.970490 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.970501 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.970511 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.970522 | orchestrator | 2026-02-28 01:16:05.970533 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-02-28 01:16:05.970543 | orchestrator | Saturday 28 February 2026 01:12:22 +0000 (0:00:00.862) 0:08:03.440 ***** 2026-02-28 01:16:05.970554 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-28 01:16:05.970565 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-28 01:16:05.970575 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-28 
01:16:05.970586 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-28 01:16:05.970598 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-28 01:16:05.970609 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-28 01:16:05.970620 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-28 01:16:05.970630 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-28 01:16:05.970641 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-28 01:16:05.970659 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-28 01:16:05.970669 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.970680 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-28 01:16:05.970690 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.970700 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-28 01:16:05.970711 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-28 01:16:05.970722 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-28 01:16:05.970733 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.970743 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 
'auth.conf', 'service': 'nova-libvirt'}) 2026-02-28 01:16:05.970754 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-28 01:16:05.970770 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-28 01:16:05.970780 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-28 01:16:05.970790 | orchestrator | 2026-02-28 01:16:05.970801 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-02-28 01:16:05.970811 | orchestrator | Saturday 28 February 2026 01:12:31 +0000 (0:00:09.066) 0:08:12.506 ***** 2026-02-28 01:16:05.970822 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-28 01:16:05.970833 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-28 01:16:05.970844 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-28 01:16:05.970855 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-28 01:16:05.970871 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-28 01:16:05.970881 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-28 01:16:05.970891 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-28 01:16:05.970902 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-28 01:16:05.970913 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-28 01:16:05.970923 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-28 01:16:05.970934 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-28 01:16:05.970944 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-28 01:16:05.970954 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-28 01:16:05.970965 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-28 01:16:05.970975 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.970986 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-28 01:16:05.970996 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.971007 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-28 01:16:05.971018 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-28 01:16:05.971042 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.971053 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-28 01:16:05.971064 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-28 01:16:05.971074 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-28 01:16:05.971085 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-28 01:16:05.971095 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-28 01:16:05.971106 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-28 01:16:05.971116 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 
'ssh_config'}) 2026-02-28 01:16:05.971126 | orchestrator | 2026-02-28 01:16:05.971137 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-02-28 01:16:05.971147 | orchestrator | Saturday 28 February 2026 01:12:41 +0000 (0:00:09.194) 0:08:21.700 ***** 2026-02-28 01:16:05.971158 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:16:05.971168 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:16:05.971178 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:16:05.971189 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.971199 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.971208 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.971218 | orchestrator | 2026-02-28 01:16:05.971228 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-02-28 01:16:05.971239 | orchestrator | Saturday 28 February 2026 01:12:42 +0000 (0:00:00.997) 0:08:22.697 ***** 2026-02-28 01:16:05.971249 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:16:05.971260 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:16:05.971271 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:16:05.971282 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.971293 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.971304 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.971314 | orchestrator | 2026-02-28 01:16:05.971324 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-02-28 01:16:05.971335 | orchestrator | Saturday 28 February 2026 01:12:42 +0000 (0:00:00.717) 0:08:23.415 ***** 2026-02-28 01:16:05.971346 | orchestrator | changed: [testbed-node-3] 2026-02-28 01:16:05.971356 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.971367 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.971378 | orchestrator | skipping: 
[testbed-node-2] 2026-02-28 01:16:05.971389 | orchestrator | changed: [testbed-node-4] 2026-02-28 01:16:05.971400 | orchestrator | changed: [testbed-node-5] 2026-02-28 01:16:05.971411 | orchestrator | 2026-02-28 01:16:05.971421 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-02-28 01:16:05.971431 | orchestrator | Saturday 28 February 2026 01:12:45 +0000 (0:00:03.078) 0:08:26.493 ***** 2026-02-28 01:16:05.971476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:16:05.971488 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:16:05.971508 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.971518 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:16:05.971525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:16:05.971532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:16:05.971542 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.971549 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:16:05.971561 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:16:05.971573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:16:05.971580 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.971586 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:16:05.971593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-28 01:16:05.971599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.971606 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.971615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-28 01:16:05.971627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.971639 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.971646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-28 01:16:05.971653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.971659 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.971665 | orchestrator | 2026-02-28 01:16:05.971672 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-02-28 01:16:05.971678 | orchestrator | 
Saturday 28 February 2026 01:12:47 +0000 (0:00:01.566) 0:08:28.060 ***** 2026-02-28 01:16:05.971684 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-28 01:16:05.971693 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-28 01:16:05.971703 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:16:05.971713 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-28 01:16:05.971724 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-28 01:16:05.971734 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:16:05.971745 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-28 01:16:05.971757 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-02-28 01:16:05.971768 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:16:05.971778 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-02-28 01:16:05.971787 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-02-28 01:16:05.971794 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.971800 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-02-28 01:16:05.971806 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-02-28 01:16:05.971812 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.971818 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-28 01:16:05.971825 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-02-28 01:16:05.971831 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.971837 | orchestrator | 2026-02-28 01:16:05.971844 | orchestrator | TASK [service-check-containers : nova_cell | Check containers] ***************** 2026-02-28 01:16:05.971850 | orchestrator | Saturday 28 February 2026 01:12:48 +0000 (0:00:00.994) 0:08:29.054 ***** 2026-02-28 01:16:05.971866 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-28 01:16:05.971889 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-28 01:16:05.971902 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-28 01:16:05.971913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:16:05.971925 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 01:16:05.971936 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 01:16:05.971958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:16:05.971973 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 01:16:05.971980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:16:05.971987 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.971993 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.972000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.972018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.972029 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.972036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:16:05.972042 | orchestrator | 2026-02-28 01:16:05.972049 | orchestrator | TASK [service-check-containers : nova_cell | Notify handlers to restart containers] *** 2026-02-28 01:16:05.972055 | orchestrator | Saturday 28 February 2026 01:12:51 +0000 (0:00:03.029) 0:08:32.083 ***** 2026-02-28 01:16:05.972062 | orchestrator | changed: [testbed-node-3] => { 2026-02-28 01:16:05.972068 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:16:05.972075 | orchestrator | } 2026-02-28 01:16:05.972081 | orchestrator | changed: [testbed-node-4] => { 2026-02-28 01:16:05.972088 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:16:05.972094 | orchestrator | } 2026-02-28 01:16:05.972101 | orchestrator | changed: [testbed-node-5] => { 2026-02-28 01:16:05.972107 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:16:05.972114 | orchestrator | } 2026-02-28 01:16:05.972120 | orchestrator | changed: [testbed-node-0] => { 2026-02-28 01:16:05.972127 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:16:05.972133 | orchestrator | } 2026-02-28 01:16:05.972139 | orchestrator | changed: [testbed-node-1] => { 2026-02-28 
01:16:05.972145 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:16:05.972152 | orchestrator | } 2026-02-28 01:16:05.972158 | orchestrator | changed: [testbed-node-2] => { 2026-02-28 01:16:05.972164 | orchestrator |  "msg": "Notifying handlers" 2026-02-28 01:16:05.972170 | orchestrator | } 2026-02-28 01:16:05.972177 | orchestrator | 2026-02-28 01:16:05.972183 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-28 01:16:05.972190 | orchestrator | Saturday 28 February 2026 01:12:52 +0000 (0:00:00.920) 0:08:33.004 ***** 2026-02-28 01:16:05.972201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:16:05.972208 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:16:05.972218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.972224 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:16:05.972236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:16:05.972243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 
'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:16:05.972250 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.972263 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:16:05.972270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:16:05.972280 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:16:05.972291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.972298 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:16:05.972304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-28 01:16:05.972311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.972321 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:16:05.972328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-28 01:16:05.972335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.972341 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:16:05.972351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-28 01:16:05.972361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:16:05.972368 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:16:05.972374 | orchestrator | 2026-02-28 01:16:05.972381 | orchestrator | TASK [nova-cell : include_tasks] 
***********************************************
2026-02-28 01:16:05.972387 | orchestrator | Saturday 28 February 2026 01:12:54 +0000 (0:00:02.312) 0:08:35.317 *****
2026-02-28 01:16:05.972394 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:16:05.972400 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:16:05.972406 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:16:05.972412 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:16:05.972419 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:16:05.972425 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:16:05.972431 | orchestrator |
2026-02-28 01:16:05.972485 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-28 01:16:05.972494 | orchestrator | Saturday 28 February 2026 01:12:55 +0000 (0:00:00.708) 0:08:36.025 *****
2026-02-28 01:16:05.972501 | orchestrator |
2026-02-28 01:16:05.972528 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-28 01:16:05.972535 | orchestrator | Saturday 28 February 2026 01:12:55 +0000 (0:00:00.148) 0:08:36.173 *****
2026-02-28 01:16:05.972548 | orchestrator |
2026-02-28 01:16:05.972554 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-28 01:16:05.972560 | orchestrator | Saturday 28 February 2026 01:12:55 +0000 (0:00:00.138) 0:08:36.312 *****
2026-02-28 01:16:05.972567 | orchestrator |
2026-02-28 01:16:05.972573 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-28 01:16:05.972579 | orchestrator | Saturday 28 February 2026 01:12:56 +0000 (0:00:00.312) 0:08:36.624 *****
2026-02-28 01:16:05.972586 | orchestrator |
2026-02-28 01:16:05.972592 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-28 01:16:05.972598 | orchestrator | Saturday 28 February 2026 01:12:56 +0000 (0:00:00.151) 0:08:36.776 *****
2026-02-28 01:16:05.972605 | orchestrator |
2026-02-28 01:16:05.972611 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-28 01:16:05.972617 | orchestrator | Saturday 28 February 2026 01:12:56 +0000 (0:00:00.150) 0:08:36.926 *****
2026-02-28 01:16:05.972624 | orchestrator |
2026-02-28 01:16:05.972630 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-02-28 01:16:05.972636 | orchestrator | Saturday 28 February 2026 01:12:56 +0000 (0:00:00.135) 0:08:37.062 *****
2026-02-28 01:16:05.972643 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:16:05.972650 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:16:05.972661 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:16:05.972671 | orchestrator |
2026-02-28 01:16:05.972681 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-02-28 01:16:05.972691 | orchestrator | Saturday 28 February 2026 01:13:04 +0000 (0:00:08.152) 0:08:45.215 *****
2026-02-28 01:16:05.972702 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:16:05.972713 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:16:05.972722 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:16:05.972729 | orchestrator |
2026-02-28 01:16:05.972735 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-02-28 01:16:05.972741 | orchestrator | Saturday 28 February 2026 01:13:21 +0000 (0:00:16.860) 0:09:02.075 *****
2026-02-28 01:16:05.972748 | orchestrator | changed: [testbed-node-3]
2026-02-28 01:16:05.972754 | orchestrator | changed: [testbed-node-4]
2026-02-28 01:16:05.972761 | orchestrator | changed: [testbed-node-5]
2026-02-28 01:16:05.972767 | orchestrator |
2026-02-28 01:16:05.972774 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-02-28 01:16:05.972780 | orchestrator | Saturday 28 February 2026 01:13:41 +0000 (0:00:20.100) 0:09:22.175 *****
2026-02-28 01:16:05.972786 | orchestrator | changed: [testbed-node-3]
2026-02-28 01:16:05.972793 | orchestrator | changed: [testbed-node-5]
2026-02-28 01:16:05.972799 | orchestrator | changed: [testbed-node-4]
2026-02-28 01:16:05.972805 | orchestrator |
2026-02-28 01:16:05.972812 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-02-28 01:16:05.972818 | orchestrator | Saturday 28 February 2026 01:14:20 +0000 (0:00:38.857) 0:10:01.033 *****
2026-02-28 01:16:05.972824 | orchestrator | changed: [testbed-node-4]
2026-02-28 01:16:05.972831 | orchestrator | changed: [testbed-node-3]
2026-02-28 01:16:05.972837 | orchestrator | changed: [testbed-node-5]
2026-02-28 01:16:05.972843 | orchestrator |
2026-02-28 01:16:05.972850 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-02-28 01:16:05.972856 | orchestrator | Saturday 28 February 2026 01:14:21 +0000 (0:00:00.810) 0:10:01.843 *****
2026-02-28 01:16:05.972862 | orchestrator | changed: [testbed-node-3]
2026-02-28 01:16:05.972869 | orchestrator | changed: [testbed-node-4]
2026-02-28 01:16:05.972875 | orchestrator | changed: [testbed-node-5]
2026-02-28 01:16:05.972881 | orchestrator |
2026-02-28 01:16:05.972888 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-02-28 01:16:05.972894 | orchestrator | Saturday 28 February 2026 01:14:22 +0000 (0:00:00.798) 0:10:02.641 *****
2026-02-28 01:16:05.972900 | orchestrator | changed: [testbed-node-4]
2026-02-28 01:16:05.972907 | orchestrator | changed: [testbed-node-5]
2026-02-28 01:16:05.972918 | orchestrator | changed: [testbed-node-3]
2026-02-28 01:16:05.972924 | orchestrator |
2026-02-28 01:16:05.972935 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-02-28
01:16:05.972942 | orchestrator | Saturday 28 February 2026 01:14:45 +0000 (0:00:23.314) 0:10:25.955 *****
2026-02-28 01:16:05.972948 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:16:05.972955 | orchestrator |
2026-02-28 01:16:05.972961 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-02-28 01:16:05.972967 | orchestrator | Saturday 28 February 2026 01:14:45 +0000 (0:00:00.150) 0:10:26.106 *****
2026-02-28 01:16:05.972974 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:16:05.972980 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:16:05.972986 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:16:05.972993 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:16:05.972999 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:16:05.973006 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-02-28 01:16:05.973018 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-28 01:16:05.973024 | orchestrator |
2026-02-28 01:16:05.973031 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-02-28 01:16:05.973037 | orchestrator | Saturday 28 February 2026 01:15:09 +0000 (0:00:23.814) 0:10:49.920 *****
2026-02-28 01:16:05.973044 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:16:05.973050 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:16:05.973056 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:16:05.973062 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:16:05.973069 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:16:05.973075 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:16:05.973081 | orchestrator |
2026-02-28 01:16:05.973087 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-02-28 01:16:05.973094 | orchestrator | Saturday 28 February 2026 01:15:20 +0000 (0:00:11.070) 0:11:00.991 *****
2026-02-28 01:16:05.973100 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:16:05.973107 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:16:05.973113 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:16:05.973119 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:16:05.973125 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:16:05.973132 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2026-02-28 01:16:05.973138 | orchestrator |
2026-02-28 01:16:05.973145 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-28 01:16:05.973151 | orchestrator | Saturday 28 February 2026 01:15:25 +0000 (0:00:05.468) 0:11:06.459 *****
2026-02-28 01:16:05.973157 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-28 01:16:05.973164 | orchestrator |
2026-02-28 01:16:05.973170 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-28 01:16:05.973176 | orchestrator | Saturday 28 February 2026 01:15:40 +0000 (0:00:14.413) 0:11:20.872 *****
2026-02-28 01:16:05.973183 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-28 01:16:05.973189 | orchestrator |
2026-02-28 01:16:05.973195 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-02-28 01:16:05.973202 | orchestrator | Saturday 28 February 2026 01:15:42 +0000 (0:00:01.967) 0:11:22.839 *****
2026-02-28 01:16:05.973208 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:16:05.973215 | orchestrator |
2026-02-28 01:16:05.973226 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-02-28 01:16:05.973236 | orchestrator | Saturday 28 February 2026 01:15:44 +0000 (0:00:01.971) 0:11:24.811 *****
2026-02-28 01:16:05.973247 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-28 01:16:05.973257 | orchestrator |
2026-02-28 01:16:05.973267 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-02-28 01:16:05.973285 | orchestrator |
2026-02-28 01:16:05.973295 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-02-28 01:16:05.973307 | orchestrator | Saturday 28 February 2026 01:15:57 +0000 (0:00:13.417) 0:11:38.229 *****
2026-02-28 01:16:05.973318 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:16:05.973327 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:16:05.973337 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:16:05.973349 | orchestrator |
2026-02-28 01:16:05.973359 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-02-28 01:16:05.973370 | orchestrator |
2026-02-28 01:16:05.973381 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-02-28 01:16:05.973391 | orchestrator | Saturday 28 February 2026 01:15:59 +0000 (0:00:01.409) 0:11:39.638 *****
2026-02-28 01:16:05.973402 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:16:05.973413 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:16:05.973423 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:16:05.973434 | orchestrator |
2026-02-28 01:16:05.973462 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-02-28 01:16:05.973473 | orchestrator |
2026-02-28 01:16:05.973483 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-02-28 01:16:05.973492 | orchestrator | Saturday 28 February 2026 01:15:59 +0000 (0:00:00.780) 0:11:40.419 *****
2026-02-28 01:16:05.973503 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-02-28 01:16:05.973514
| orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-02-28 01:16:05.973525 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-02-28 01:16:05.973535 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-02-28 01:16:05.973545 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-02-28 01:16:05.973552 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-02-28 01:16:05.973558 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:16:05.973564 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-02-28 01:16:05.973571 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-02-28 01:16:05.973577 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-02-28 01:16:05.973590 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-02-28 01:16:05.973597 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-02-28 01:16:05.973603 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-02-28 01:16:05.973610 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:16:05.973616 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-02-28 01:16:05.973622 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-02-28 01:16:05.973629 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-02-28 01:16:05.973635 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-02-28 01:16:05.973641 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-02-28 01:16:05.973648 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-02-28 01:16:05.973654 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:16:05.973660 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-02-28 01:16:05.973673 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-02-28 01:16:05.973679 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-02-28 01:16:05.973686 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-02-28 01:16:05.973692 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-02-28 01:16:05.973699 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-02-28 01:16:05.973705 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-02-28 01:16:05.973711 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-02-28 01:16:05.973724 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-02-28 01:16:05.973730 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-02-28 01:16:05.973737 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-02-28 01:16:05.973743 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-02-28 01:16:05.973749 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:16:05.973755 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:16:05.973762 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-02-28 01:16:05.973768 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-02-28 01:16:05.973775 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-02-28 01:16:05.973781 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-02-28 01:16:05.973787 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-02-28 01:16:05.973793 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-02-28 01:16:05.973800 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:16:05.973806 | orchestrator |
2026-02-28 01:16:05.973813 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-02-28 01:16:05.973819 | orchestrator |
2026-02-28 01:16:05.973825 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-02-28 01:16:05.973832 | orchestrator | Saturday 28 February 2026 01:16:01 +0000 (0:00:01.505) 0:11:41.925 *****
2026-02-28 01:16:05.973838 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-02-28 01:16:05.973844 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-02-28 01:16:05.973851 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:16:05.973857 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-02-28 01:16:05.973864 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-02-28 01:16:05.973870 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:16:05.973876 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-02-28 01:16:05.973883 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-02-28 01:16:05.973889 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:16:05.973895 | orchestrator |
2026-02-28 01:16:05.973901 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-02-28 01:16:05.973908 | orchestrator |
2026-02-28 01:16:05.973914 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-02-28 01:16:05.973920 | orchestrator | Saturday 28 February 2026 01:16:02 +0000 (0:00:00.839) 0:11:42.764 *****
2026-02-28 01:16:05.973927 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:16:05.973933 | orchestrator |
2026-02-28 01:16:05.973939 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-02-28 01:16:05.973946 | orchestrator |
2026-02-28 01:16:05.973952 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-02-28 01:16:05.973958 |
orchestrator | Saturday 28 February 2026 01:16:03 +0000 (0:00:00.973) 0:11:43.738 *****
2026-02-28 01:16:05.973965 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:16:05.973971 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:16:05.973977 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:16:05.973983 | orchestrator |
2026-02-28 01:16:05.973990 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 01:16:05.973996 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:16:05.974004 | orchestrator | testbed-node-0 : ok=59  changed=39  unreachable=0 failed=0 skipped=49  rescued=0 ignored=0
2026-02-28 01:16:05.974010 | orchestrator | testbed-node-1 : ok=32  changed=23  unreachable=0 failed=0 skipped=56  rescued=0 ignored=0
2026-02-28 01:16:05.974047 | orchestrator | testbed-node-2 : ok=32  changed=23  unreachable=0 failed=0 skipped=56  rescued=0 ignored=0
2026-02-28 01:16:05.974064 | orchestrator | testbed-node-3 : ok=46  changed=29  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2026-02-28 01:16:05.974071 | orchestrator | testbed-node-4 : ok=44  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-02-28 01:16:05.974077 | orchestrator | testbed-node-5 : ok=39  changed=28  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-02-28 01:16:05.974083 | orchestrator |
2026-02-28 01:16:05.974090 | orchestrator |
2026-02-28 01:16:05.974096 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 01:16:05.974103 | orchestrator | Saturday 28 February 2026 01:16:03 +0000 (0:00:00.519) 0:11:44.257 *****
2026-02-28 01:16:05.974109 | orchestrator | ===============================================================================
2026-02-28 01:16:05.974119 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 38.86s
2026-02-28 01:16:05.974126 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 32.05s
2026-02-28 01:16:05.974132 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 28.64s
2026-02-28 01:16:05.974138 | orchestrator | nova-cell : Get new Libvirt version ------------------------------------ 27.88s
2026-02-28 01:16:05.974145 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.81s
2026-02-28 01:16:05.974151 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 23.69s
2026-02-28 01:16:05.974157 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 23.31s
2026-02-28 01:16:05.974163 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 20.10s
2026-02-28 01:16:05.974170 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.96s
2026-02-28 01:16:05.974176 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.86s
2026-02-28 01:16:05.974182 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.09s
2026-02-28 01:16:05.974189 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 16.00s
2026-02-28 01:16:05.974195 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.41s
2026-02-28 01:16:05.974202 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.15s
2026-02-28 01:16:05.974208 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 14.01s
2026-02-28 01:16:05.974214 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.42s
2026-02-28 01:16:05.974221 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.18s
2026-02-28 01:16:05.974227 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.60s
2026-02-28 01:16:05.974233 | orchestrator | nova : Restart nova-api container -------------------------------------- 12.27s
2026-02-28 01:16:05.974240 | orchestrator | nova-cell : Get container facts ---------------------------------------- 11.77s
2026-02-28 01:16:09.007391 | orchestrator | 2026-02-28 01:16:09 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED
2026-02-28 01:16:09.007483 | orchestrator | 2026-02-28 01:16:09 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:16:12.056814 | orchestrator | 2026-02-28 01:16:12 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED
2026-02-28 01:16:12.056907 | orchestrator | 2026-02-28 01:16:12 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:16:15.098933 | orchestrator | 2026-02-28 01:16:15 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED
2026-02-28 01:16:15.099026 | orchestrator | 2026-02-28 01:16:15 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:16:18.150106 | orchestrator | 2026-02-28 01:16:18 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED
2026-02-28 01:16:18.150203 | orchestrator | 2026-02-28 01:16:18 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:16:21.183406 | orchestrator | 2026-02-28 01:16:21 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED
2026-02-28 01:16:21.183529 | orchestrator | 2026-02-28 01:16:21 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:16:24.233753 | orchestrator | 2026-02-28 01:16:24 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED
2026-02-28 01:16:24.233850 | orchestrator | 2026-02-28 01:16:24 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:16:27.274879 | orchestrator | 2026-02-28 01:16:27 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED
2026-02-28 01:16:27.274961 | orchestrator | 2026-02-28 01:16:27 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:16:30.319640 | orchestrator | 2026-02-28 01:16:30 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED
2026-02-28 01:16:30.319725 | orchestrator | 2026-02-28 01:16:30 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:16:33.361238 | orchestrator | 2026-02-28 01:16:33 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED
2026-02-28 01:16:33.361358 | orchestrator | 2026-02-28 01:16:33 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:16:36.421651 | orchestrator | 2026-02-28 01:16:36 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED
2026-02-28 01:16:36.421742 | orchestrator | 2026-02-28 01:16:36 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:16:39.468478 | orchestrator | 2026-02-28 01:16:39 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED
2026-02-28 01:16:39.468560 | orchestrator | 2026-02-28 01:16:39 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:16:42.515395 | orchestrator | 2026-02-28 01:16:42 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED
2026-02-28 01:16:42.515503 | orchestrator | 2026-02-28 01:16:42 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:16:45.560588 | orchestrator | 2026-02-28 01:16:45 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED
2026-02-28 01:16:45.560677 | orchestrator | 2026-02-28 01:16:45 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:16:48.610880 | orchestrator | 2026-02-28 01:16:48 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED
2026-02-28 01:16:48.610985 | orchestrator | 2026-02-28 01:16:48 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:16:51.660479 | orchestrator | 2026-02-28 01:16:51 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED
2026-02-28 01:16:51.660572
| orchestrator | 2026-02-28 01:16:51 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:16:54.702129 | orchestrator | 2026-02-28 01:16:54 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:16:54.702211 | orchestrator | 2026-02-28 01:16:54 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:16:57.754116 | orchestrator | 2026-02-28 01:16:57 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:16:57.754232 | orchestrator | 2026-02-28 01:16:57 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:00.802919 | orchestrator | 2026-02-28 01:17:00 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:17:00.803043 | orchestrator | 2026-02-28 01:17:00 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:03.862233 | orchestrator | 2026-02-28 01:17:03 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:17:03.862326 | orchestrator | 2026-02-28 01:17:03 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:06.910271 | orchestrator | 2026-02-28 01:17:06 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:17:06.910386 | orchestrator | 2026-02-28 01:17:06 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:09.957523 | orchestrator | 2026-02-28 01:17:09 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:17:09.957627 | orchestrator | 2026-02-28 01:17:09 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:13.006118 | orchestrator | 2026-02-28 01:17:13 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:17:13.006219 | orchestrator | 2026-02-28 01:17:13 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:16.044979 | orchestrator | 2026-02-28 01:17:16 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:17:16.045068 | orchestrator 
| 2026-02-28 01:17:16 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:19.074372 | orchestrator | 2026-02-28 01:17:19 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:17:19.074512 | orchestrator | 2026-02-28 01:17:19 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:22.111115 | orchestrator | 2026-02-28 01:17:22 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:17:22.111225 | orchestrator | 2026-02-28 01:17:22 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:25.161865 | orchestrator | 2026-02-28 01:17:25 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:17:25.161960 | orchestrator | 2026-02-28 01:17:25 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:28.199739 | orchestrator | 2026-02-28 01:17:28 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:17:28.199858 | orchestrator | 2026-02-28 01:17:28 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:31.235728 | orchestrator | 2026-02-28 01:17:31 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:17:31.235821 | orchestrator | 2026-02-28 01:17:31 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:34.281687 | orchestrator | 2026-02-28 01:17:34 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:17:34.281778 | orchestrator | 2026-02-28 01:17:34 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:37.321921 | orchestrator | 2026-02-28 01:17:37 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:17:37.322059 | orchestrator | 2026-02-28 01:17:37 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:40.365410 | orchestrator | 2026-02-28 01:17:40 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:17:40.365512 | orchestrator | 2026-02-28 
01:17:40 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:43.411750 | orchestrator | 2026-02-28 01:17:43 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:17:43.411837 | orchestrator | 2026-02-28 01:17:43 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:46.454435 | orchestrator | 2026-02-28 01:17:46 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state STARTED 2026-02-28 01:17:46.454582 | orchestrator | 2026-02-28 01:17:46 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:49.496304 | orchestrator | 2026-02-28 01:17:49 | INFO  | Task fd7ad46f-1ac3-4fcb-b16b-dca0c60c95c3 is in state SUCCESS 2026-02-28 01:17:49.496410 | orchestrator | 2026-02-28 01:17:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:17:49.498089 | orchestrator | 2026-02-28 01:17:49.498120 | orchestrator | 2026-02-28 01:17:49.498125 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:17:49.498130 | orchestrator | 2026-02-28 01:17:49.498134 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:17:49.498139 | orchestrator | Saturday 28 February 2026 01:12:30 +0000 (0:00:00.365) 0:00:00.365 ***** 2026-02-28 01:17:49.498144 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:17:49.498149 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:17:49.498154 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:17:49.498158 | orchestrator | 2026-02-28 01:17:49.498162 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:17:49.498166 | orchestrator | Saturday 28 February 2026 01:12:30 +0000 (0:00:00.263) 0:00:00.628 ***** 2026-02-28 01:17:49.498170 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-02-28 01:17:49.498174 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-02-28 
01:17:49.498178 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-02-28 01:17:49.498182 | orchestrator | 2026-02-28 01:17:49.498186 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-02-28 01:17:49.498189 | orchestrator | 2026-02-28 01:17:49.498193 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-28 01:17:49.498197 | orchestrator | Saturday 28 February 2026 01:12:31 +0000 (0:00:00.373) 0:00:01.002 ***** 2026-02-28 01:17:49.498201 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:17:49.498206 | orchestrator | 2026-02-28 01:17:49.498210 | orchestrator | TASK [service-ks-register : octavia | Creating/deleting services] ************** 2026-02-28 01:17:49.498214 | orchestrator | Saturday 28 February 2026 01:12:32 +0000 (0:00:00.724) 0:00:01.727 ***** 2026-02-28 01:17:49.498219 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-02-28 01:17:49.498222 | orchestrator | 2026-02-28 01:17:49.498226 | orchestrator | TASK [service-ks-register : octavia | Creating/deleting endpoints] ************* 2026-02-28 01:17:49.498230 | orchestrator | Saturday 28 February 2026 01:12:36 +0000 (0:00:04.131) 0:00:05.858 ***** 2026-02-28 01:17:49.498234 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-02-28 01:17:49.498238 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-02-28 01:17:49.498242 | orchestrator | 2026-02-28 01:17:49.498246 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-02-28 01:17:49.498249 | orchestrator | Saturday 28 February 2026 01:12:43 +0000 (0:00:07.596) 0:00:13.455 ***** 2026-02-28 01:17:49.498253 | orchestrator | ok: [testbed-node-0] => 
(item=service) 2026-02-28 01:17:49.498258 | orchestrator | 2026-02-28 01:17:49.498261 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-02-28 01:17:49.498265 | orchestrator | Saturday 28 February 2026 01:12:47 +0000 (0:00:03.777) 0:00:17.232 ***** 2026-02-28 01:17:49.498269 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-28 01:17:49.498273 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-28 01:17:49.498277 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-28 01:17:49.498281 | orchestrator | 2026-02-28 01:17:49.498285 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-02-28 01:17:49.498289 | orchestrator | Saturday 28 February 2026 01:12:56 +0000 (0:00:09.215) 0:00:26.448 ***** 2026-02-28 01:17:49.498345 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-28 01:17:49.498351 | orchestrator | 2026-02-28 01:17:49.498355 | orchestrator | TASK [service-ks-register : octavia | Granting/revoking user roles] ************ 2026-02-28 01:17:49.498393 | orchestrator | Saturday 28 February 2026 01:13:00 +0000 (0:00:03.765) 0:00:30.214 ***** 2026-02-28 01:17:49.498397 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-28 01:17:49.498411 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-28 01:17:49.498415 | orchestrator | 2026-02-28 01:17:49.498419 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-02-28 01:17:49.498423 | orchestrator | Saturday 28 February 2026 01:13:09 +0000 (0:00:09.016) 0:00:39.230 ***** 2026-02-28 01:17:49.498427 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-02-28 01:17:49.498430 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-02-28 01:17:49.498434 | 
orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-02-28 01:17:49.498438 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-02-28 01:17:49.498442 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-02-28 01:17:49.498446 | orchestrator | 2026-02-28 01:17:49.498449 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-28 01:17:49.498453 | orchestrator | Saturday 28 February 2026 01:13:27 +0000 (0:00:18.293) 0:00:57.523 ***** 2026-02-28 01:17:49.498457 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:17:49.498461 | orchestrator | 2026-02-28 01:17:49.498465 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-02-28 01:17:49.498468 | orchestrator | Saturday 28 February 2026 01:13:28 +0000 (0:00:00.708) 0:00:58.232 ***** 2026-02-28 01:17:49.498472 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:17:49.498476 | orchestrator | 2026-02-28 01:17:49.498480 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-02-28 01:17:49.498484 | orchestrator | Saturday 28 February 2026 01:13:34 +0000 (0:00:06.346) 0:01:04.579 ***** 2026-02-28 01:17:49.498642 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:17:49.498646 | orchestrator | 2026-02-28 01:17:49.498650 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-28 01:17:49.498662 | orchestrator | Saturday 28 February 2026 01:13:40 +0000 (0:00:05.462) 0:01:10.042 ***** 2026-02-28 01:17:49.498666 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:17:49.498671 | orchestrator | 2026-02-28 01:17:49.498674 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-02-28 01:17:49.498678 | orchestrator | Saturday 28 
February 2026 01:13:44 +0000 (0:00:03.776) 0:01:13.818 ***** 2026-02-28 01:17:49.498682 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-28 01:17:49.498686 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-28 01:17:49.498690 | orchestrator | 2026-02-28 01:17:49.498694 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-02-28 01:17:49.498698 | orchestrator | Saturday 28 February 2026 01:13:56 +0000 (0:00:12.412) 0:01:26.230 ***** 2026-02-28 01:17:49.498702 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-02-28 01:17:49.498738 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-02-28 01:17:49.498744 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-02-28 01:17:49.498749 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-02-28 01:17:49.498753 | orchestrator | 2026-02-28 01:17:49.498757 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-02-28 01:17:49.498761 | orchestrator | Saturday 28 February 2026 01:14:13 +0000 (0:00:16.497) 0:01:42.728 ***** 2026-02-28 01:17:49.498772 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:17:49.498776 | orchestrator | 2026-02-28 01:17:49.498780 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-02-28 01:17:49.498783 | orchestrator | Saturday 28 February 2026 01:14:18 +0000 (0:00:05.009) 0:01:47.738 ***** 2026-02-28 01:17:49.498787 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:17:49.498791 | 
orchestrator | 2026-02-28 01:17:49.498795 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-02-28 01:17:49.498799 | orchestrator | Saturday 28 February 2026 01:14:23 +0000 (0:00:05.822) 0:01:53.560 ***** 2026-02-28 01:17:49.498803 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:17:49.498806 | orchestrator | 2026-02-28 01:17:49.498810 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-02-28 01:17:49.498814 | orchestrator | Saturday 28 February 2026 01:14:24 +0000 (0:00:00.288) 0:01:53.849 ***** 2026-02-28 01:17:49.498818 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:17:49.498822 | orchestrator | 2026-02-28 01:17:49.498826 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-28 01:17:49.498830 | orchestrator | Saturday 28 February 2026 01:14:28 +0000 (0:00:04.634) 0:01:58.483 ***** 2026-02-28 01:17:49.498833 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:17:49.498838 | orchestrator | 2026-02-28 01:17:49.498841 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-02-28 01:17:49.498845 | orchestrator | Saturday 28 February 2026 01:14:30 +0000 (0:00:01.256) 0:01:59.740 ***** 2026-02-28 01:17:49.498849 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:17:49.498853 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:17:49.498857 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:17:49.498861 | orchestrator | 2026-02-28 01:17:49.498865 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-02-28 01:17:49.498869 | orchestrator | Saturday 28 February 2026 01:14:35 +0000 (0:00:05.823) 0:02:05.563 ***** 2026-02-28 01:17:49.498872 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:17:49.498876 | 
orchestrator | changed: [testbed-node-2] 2026-02-28 01:17:49.498884 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:17:49.498920 | orchestrator | 2026-02-28 01:17:49.498924 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-02-28 01:17:49.498928 | orchestrator | Saturday 28 February 2026 01:14:41 +0000 (0:00:05.102) 0:02:10.666 ***** 2026-02-28 01:17:49.498932 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:17:49.498936 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:17:49.498940 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:17:49.498944 | orchestrator | 2026-02-28 01:17:49.498948 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-02-28 01:17:49.498952 | orchestrator | Saturday 28 February 2026 01:14:41 +0000 (0:00:00.884) 0:02:11.551 ***** 2026-02-28 01:17:49.498956 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:17:49.498960 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:17:49.499444 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:17:49.499453 | orchestrator | 2026-02-28 01:17:49.499458 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-02-28 01:17:49.499462 | orchestrator | Saturday 28 February 2026 01:14:44 +0000 (0:00:02.288) 0:02:13.840 ***** 2026-02-28 01:17:49.499466 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:17:49.499471 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:17:49.499475 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:17:49.499479 | orchestrator | 2026-02-28 01:17:49.499483 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-02-28 01:17:49.499487 | orchestrator | Saturday 28 February 2026 01:14:45 +0000 (0:00:01.532) 0:02:15.372 ***** 2026-02-28 01:17:49.499491 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:17:49.499495 | orchestrator | changed: 
[testbed-node-1] 2026-02-28 01:17:49.499499 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:17:49.499509 | orchestrator | 2026-02-28 01:17:49.499513 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-02-28 01:17:49.499517 | orchestrator | Saturday 28 February 2026 01:14:47 +0000 (0:00:01.605) 0:02:16.977 ***** 2026-02-28 01:17:49.499521 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:17:49.499525 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:17:49.499529 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:17:49.499533 | orchestrator | 2026-02-28 01:17:49.499552 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-02-28 01:17:49.499557 | orchestrator | Saturday 28 February 2026 01:14:50 +0000 (0:00:03.148) 0:02:20.126 ***** 2026-02-28 01:17:49.499560 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:17:49.499564 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:17:49.499568 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:17:49.499572 | orchestrator | 2026-02-28 01:17:49.499576 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-02-28 01:17:49.499580 | orchestrator | Saturday 28 February 2026 01:14:53 +0000 (0:00:02.739) 0:02:22.866 ***** 2026-02-28 01:17:49.499583 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:17:49.499587 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:17:49.499591 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:17:49.499595 | orchestrator | 2026-02-28 01:17:49.499599 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-02-28 01:17:49.499603 | orchestrator | Saturday 28 February 2026 01:14:53 +0000 (0:00:00.700) 0:02:23.566 ***** 2026-02-28 01:17:49.499606 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:17:49.499610 | orchestrator | ok: [testbed-node-2] 2026-02-28 
01:17:49.499614 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:17:49.499618 | orchestrator | 2026-02-28 01:17:49.499622 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-28 01:17:49.499626 | orchestrator | Saturday 28 February 2026 01:14:57 +0000 (0:00:04.043) 0:02:27.610 ***** 2026-02-28 01:17:49.499630 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:17:49.499634 | orchestrator | 2026-02-28 01:17:49.499638 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-02-28 01:17:49.499641 | orchestrator | Saturday 28 February 2026 01:14:58 +0000 (0:00:00.802) 0:02:28.412 ***** 2026-02-28 01:17:49.499645 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:17:49.499649 | orchestrator | 2026-02-28 01:17:49.499653 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-28 01:17:49.499657 | orchestrator | Saturday 28 February 2026 01:15:03 +0000 (0:00:04.341) 0:02:32.753 ***** 2026-02-28 01:17:49.499668 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:17:49.499672 | orchestrator | 2026-02-28 01:17:49.499675 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-02-28 01:17:49.499684 | orchestrator | Saturday 28 February 2026 01:15:06 +0000 (0:00:03.451) 0:02:36.205 ***** 2026-02-28 01:17:49.499688 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-28 01:17:49.499692 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-28 01:17:49.499696 | orchestrator | 2026-02-28 01:17:49.499700 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-02-28 01:17:49.499704 | orchestrator | Saturday 28 February 2026 01:15:14 +0000 (0:00:07.561) 0:02:43.767 ***** 2026-02-28 01:17:49.499707 | 
orchestrator | ok: [testbed-node-0] 2026-02-28 01:17:49.499711 | orchestrator | 2026-02-28 01:17:49.499715 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-02-28 01:17:49.499719 | orchestrator | Saturday 28 February 2026 01:15:17 +0000 (0:00:03.804) 0:02:47.572 ***** 2026-02-28 01:17:49.499723 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:17:49.499727 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:17:49.499731 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:17:49.499735 | orchestrator | 2026-02-28 01:17:49.499739 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-02-28 01:17:49.499746 | orchestrator | Saturday 28 February 2026 01:15:18 +0000 (0:00:00.595) 0:02:48.167 ***** 2026-02-28 01:17:49.499755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:17:49.499774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:17:49.499779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:17:49.499784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 
'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:17:49.499788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:17:49.499799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:17:49.499804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:17:49.499809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:17:49.499824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-28 01:17:49.499828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:17:49.499833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-28 01:17:49.499837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:17:49.499849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-28 01:17:49.499853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:17:49.499866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:17:49.499870 | orchestrator | 2026-02-28 01:17:49.499874 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-02-28 01:17:49.499878 | orchestrator | Saturday 28 February 2026 01:15:21 +0000 (0:00:03.058) 0:02:51.225 ***** 2026-02-28 01:17:49.499882 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:17:49.499886 | orchestrator | 2026-02-28 01:17:49.499890 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-02-28 01:17:49.499893 | orchestrator | Saturday 28 February 2026 01:15:21 +0000 (0:00:00.212) 0:02:51.438 ***** 2026-02-28 
01:17:49.499897 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:17:49.499901 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:17:49.499905 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:17:49.499909 | orchestrator | 2026-02-28 01:17:49.499912 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-02-28 01:17:49.499916 | orchestrator | Saturday 28 February 2026 01:15:22 +0000 (0:00:01.126) 0:02:52.564 ***** 2026-02-28 01:17:49.499921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 01:17:49.499928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 01:17:49.499935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 01:17:49.499940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 01:17:49.499954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 01:17:49.499958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 01:17:49.499962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 01:17:49.499970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 01:17:49.499974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:17:49.499980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:17:49.499984 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:17:49.499988 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:17:49.500002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 01:17:49.500006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 01:17:49.500010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2026-02-28 01:17:49.500017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 01:17:49.500021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:17:49.500025 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:17:49.500029 | orchestrator | 2026-02-28 01:17:49.500035 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-28 01:17:49.500039 | orchestrator | Saturday 28 February 2026 01:15:24 +0000 (0:00:01.726) 0:02:54.291 ***** 2026-02-28 01:17:49.500043 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:17:49.500047 | orchestrator | 2026-02-28 01:17:49.500051 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 
2026-02-28 01:17:49.500055 | orchestrator | Saturday 28 February 2026 01:15:25 +0000 (0:00:00.829) 0:02:55.120 ***** 2026-02-28 01:17:49.500059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:17:49.500073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:17:49.500080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:17:49.500085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:17:49.500091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:17:49.500096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:17:49.500100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:17:49.500112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:17:49.500120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:17:49.500124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-28 01:17:49.500128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-28 01:17:49.500135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-28 01:17:49.500139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:17:49.500152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 
'timeout': '30'}}}) 2026-02-28 01:17:49.500157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:17:49.500164 | orchestrator | 2026-02-28 01:17:49.500168 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-02-28 01:17:49.500172 | orchestrator | Saturday 28 February 2026 01:15:31 +0000 (0:00:05.560) 0:03:00.681 ***** 2026-02-28 01:17:49.500176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 01:17:49.500180 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 01:17:49.500187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 01:17:49.500191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 01:17:49.500204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:17:49.500211 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:17:49.500215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 01:17:49.500219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 01:17:49.500223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 01:17:49.500230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 01:17:49.500234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:17:49.500238 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:17:49.500244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 01:17:49.500254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 01:17:49.500258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 01:17:49.500262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 01:17:49.500268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:17:49.500272 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:17:49.500276 | orchestrator | 2026-02-28 01:17:49.500280 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-02-28 01:17:49.500284 | orchestrator | Saturday 28 February 2026 01:15:32 +0000 
(0:00:01.088) 0:03:01.770 ***** 2026-02-28 01:17:49.500288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 01:17:49.500300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 01:17:49.500304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 01:17:49.500308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 01:17:49.500312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:17:49.500316 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:17:49.500323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 01:17:49.500327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 01:17:49.500338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 01:17:49.500342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 01:17:49.500346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:17:49.500350 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:17:49.500354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 01:17:49.500371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 01:17:49.500376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 01:17:49.500388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 01:17:49.500392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:17:49.500396 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:17:49.500400 | orchestrator | 2026-02-28 01:17:49.500404 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-02-28 01:17:49.500408 | orchestrator | Saturday 28 February 2026 01:15:33 +0000 (0:00:01.019) 0:03:02.789 ***** 2026-02-28 01:17:49.500412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:17:49.500419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:17:49.500423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:17:49.500431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:17:49.500436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:17:49.500440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:17:49.500444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:17:49.500448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:17:49.500454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:17:49.500461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-28 01:17:49.500468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-28 01:17:49.500472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-28 01:17:49.500476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:17:49.500480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:17:49.500486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:17:49.500493 | orchestrator | 2026-02-28 01:17:49.500497 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-02-28 01:17:49.500501 | orchestrator | Saturday 28 February 2026 01:15:38 +0000 (0:00:05.484) 0:03:08.274 ***** 2026-02-28 01:17:49.500505 | 
orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-28 01:17:49.500509 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-28 01:17:49.500513 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-28 01:17:49.500517 | orchestrator | 2026-02-28 01:17:49.500521 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-02-28 01:17:49.500525 | orchestrator | Saturday 28 February 2026 01:15:40 +0000 (0:00:02.028) 0:03:10.303 ***** 2026-02-28 01:17:49.500532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:17:49.500536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:17:49.500540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:17:49.500547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:17:49.500555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:17:49.500559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:17:49.500565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:17:49.500569 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-28 01:17:49.500573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-28 01:17:49.500577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-28 01:17:49.500587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-28 01:17:49.500592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-28 01:17:49.500596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:17:49.500603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:17:49.500607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:17:49.500611 | orchestrator |
2026-02-28 01:17:49.500615 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2026-02-28 01:17:49.500619 | orchestrator | Saturday 28 February 2026 01:15:59 +0000 (0:00:18.509) 0:03:28.812 *****
2026-02-28 01:17:49.500623 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:17:49.500626 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:17:49.500630 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:17:49.500634 | orchestrator |
2026-02-28 01:17:49.500638 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-02-28 01:17:49.500642 | orchestrator | Saturday 28 February 2026 01:16:00 +0000 (0:00:01.641) 0:03:30.454 *****
2026-02-28 01:17:49.500646 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-02-28 01:17:49.500661 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-02-28 01:17:49.500665 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-02-28 01:17:49.500668 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-02-28 01:17:49.500677 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-02-28 01:17:49.500681 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-02-28 01:17:49.500685 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-02-28 01:17:49.500689 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-02-28 01:17:49.500693 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-02-28 01:17:49.500696 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-02-28 01:17:49.500700 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-02-28 01:17:49.500704 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-02-28 01:17:49.500708 | orchestrator |
2026-02-28 01:17:49.500711 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-02-28 01:17:49.500715 | orchestrator | Saturday 28 February 2026 01:16:06 +0000 (0:00:05.567) 0:03:36.021 *****
2026-02-28 01:17:49.500719 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-02-28 01:17:49.500725 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-02-28 01:17:49.500729 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-02-28 01:17:49.500733 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-02-28 01:17:49.500737 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-02-28 01:17:49.500741 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-02-28 01:17:49.500745 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-02-28 01:17:49.500748 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-02-28 01:17:49.500752 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-02-28 01:17:49.500756 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-02-28 01:17:49.500760 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-02-28 01:17:49.500763 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-02-28 01:17:49.500767 | orchestrator |
2026-02-28 01:17:49.500771 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-02-28 01:17:49.500775 | orchestrator | Saturday 28 February 2026 01:16:12 +0000 (0:00:05.920) 0:03:41.942 *****
2026-02-28 01:17:49.500779 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-02-28 01:17:49.500783 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-02-28 01:17:49.500786 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-02-28 01:17:49.500790 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-02-28 01:17:49.500794 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-02-28 01:17:49.500798 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-02-28 01:17:49.500802 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-02-28 01:17:49.500805 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-02-28 01:17:49.500811 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-02-28 01:17:49.500815 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-02-28 01:17:49.500819 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-02-28 01:17:49.500823 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-02-28 01:17:49.500826 | orchestrator |
2026-02-28 01:17:49.500830 | orchestrator | TASK [service-check-containers : octavia | Check containers] *******************
2026-02-28 01:17:49.500834 | orchestrator | Saturday 28 February 2026 01:16:17 +0000 (0:00:05.308) 0:03:47.251 *****
2026-02-28 01:17:49.500841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-28 01:17:49.500845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-28 01:17:49.500852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-28 01:17:49.500856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-28 01:17:49.500862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-28 01:17:49.500869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-28 01:17:49.500873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-28 01:17:49.500877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-28 01:17:49.500881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-28 01:17:49.500887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-28 01:17:49.500891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-28 01:17:49.500897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-28 01:17:49.500904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:17:49.500908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:17:49.500912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:17:49.500916 | orchestrator |
2026-02-28 01:17:49.500920 | orchestrator | TASK [service-check-containers : octavia | Notify handlers to restart containers] ***
2026-02-28 01:17:49.500924 | orchestrator | Saturday 28 February 2026 01:16:21 +0000 (0:00:04.345) 0:03:51.597 *****
2026-02-28 01:17:49.500928 | orchestrator | changed: [testbed-node-0] => {
2026-02-28 01:17:49.500932 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 01:17:49.500936 | orchestrator | }
2026-02-28 01:17:49.500940 | orchestrator | changed: [testbed-node-1] => {
2026-02-28 01:17:49.500944 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 01:17:49.500948 | orchestrator | }
2026-02-28 01:17:49.500951 | orchestrator | changed: [testbed-node-2] => {
2026-02-28 01:17:49.500955 | orchestrator |  "msg": "Notifying handlers"
2026-02-28 01:17:49.500959 | orchestrator | }
2026-02-28 01:17:49.500963 | orchestrator |
2026-02-28 01:17:49.500969 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-28 01:17:49.500973 | orchestrator | Saturday 28 February 2026 01:16:22 +0000 (0:00:00.381) 0:03:51.979 *****
2026-02-28 01:17:49.500977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-28 01:17:49.500988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-28 01:17:49.500992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes':
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-28 01:17:49.500996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-28 01:17:49.501000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:17:49.501004 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:17:49.501011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-28 01:17:49.501015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-28 01:17:49.501027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-28 01:17:49.501031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-28 01:17:49.501035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:17:49.501039 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:17:49.501043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-28 01:17:49.501050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-28 01:17:49.501054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-28 01:17:49.501064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-28 01:17:49.501069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:17:49.501073 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:17:49.501076 | orchestrator |
2026-02-28 01:17:49.501080 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-28 01:17:49.501084 | orchestrator | Saturday 28 February 2026 01:16:23 +0000 (0:00:01.579) 0:03:53.558 *****
2026-02-28 01:17:49.501088 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:17:49.501092 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:17:49.501096 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:17:49.501100 | orchestrator |
2026-02-28 01:17:49.501103 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2026-02-28 01:17:49.501107 | orchestrator | Saturday 28 February 2026 01:16:24 +0000 (0:00:00.362) 0:03:53.920 *****
2026-02-28 01:17:49.501111 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:17:49.501115 | orchestrator |
2026-02-28 01:17:49.501119 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-02-28 01:17:49.501122 | orchestrator | Saturday 28 February 2026 01:16:26 +0000 (0:00:02.415) 0:03:56.336 *****
2026-02-28 01:17:49.501126 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:17:49.501130 | orchestrator |
2026-02-28 01:17:49.501134 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-02-28 01:17:49.501138 | orchestrator | Saturday 28 February 2026 01:16:29 +0000 (0:00:02.563) 0:03:58.900 *****
2026-02-28 01:17:49.501142 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:17:49.501145 | orchestrator |
2026-02-28 01:17:49.501149 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-02-28 01:17:49.501153 | orchestrator | Saturday 28 February 2026 01:16:31 +0000 (0:00:02.457) 0:04:01.358 *****
2026-02-28 01:17:49.501157 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:17:49.501161 | orchestrator |
2026-02-28 01:17:49.501164 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-02-28 01:17:49.501168 | orchestrator | Saturday 28 February 2026 01:16:34 +0000 (0:00:02.323) 0:04:03.681 *****
2026-02-28 01:17:49.501172 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:17:49.501176 | orchestrator |
2026-02-28 01:17:49.501180 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-02-28 01:17:49.501184 | orchestrator | Saturday 28 February 2026 01:16:59 +0000 (0:00:25.367) 0:04:29.048 *****
2026-02-28 01:17:49.501187 | orchestrator |
2026-02-28 01:17:49.501191 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-02-28 01:17:49.501199 | orchestrator | Saturday 28 February 2026 01:16:59 +0000 (0:00:00.069) 0:04:29.118 *****
2026-02-28 01:17:49.501203 | orchestrator |
2026-02-28 01:17:49.501207 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-02-28 01:17:49.501211 | orchestrator | Saturday 28 February 2026 01:16:59 +0000 (0:00:00.074) 0:04:29.193 *****
2026-02-28 01:17:49.501214 | orchestrator |
2026-02-28 01:17:49.501218 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-02-28 01:17:49.501222 | orchestrator | Saturday 28 February 2026 01:16:59 +0000 (0:00:00.304) 0:04:29.497 *****
2026-02-28 01:17:49.501226 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:17:49.501233 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:17:49.501237 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:17:49.501240 | orchestrator |
2026-02-28 01:17:49.501244 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-02-28 01:17:49.501248 | orchestrator | Saturday 28 February 2026 01:17:12 +0000 (0:00:12.864) 0:04:42.362 *****
2026-02-28 01:17:49.501252 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:17:49.501256 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:17:49.501260 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:17:49.501263 | orchestrator |
2026-02-28 01:17:49.501267 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-02-28 01:17:49.501271 | orchestrator | Saturday 28 February 2026 01:17:24 +0000 (0:00:11.740) 0:04:54.103 *****
2026-02-28 01:17:49.501275 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:17:49.501279 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:17:49.501283 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:17:49.501286 | orchestrator |
2026-02-28 01:17:49.501290 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-02-28 01:17:49.501294 | orchestrator | Saturday 28 February 2026 01:17:30 +0000 (0:00:06.324) 0:05:00.428 *****
2026-02-28 01:17:49.501298 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:17:49.501302 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:17:49.501306 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:17:49.501309 | orchestrator |
2026-02-28 01:17:49.501313 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-02-28 01:17:49.501317 | orchestrator | Saturday 28 February 2026 01:17:41 +0000 (0:00:10.642) 0:05:11.071 *****
2026-02-28 01:17:49.501321 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:17:49.501325 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:17:49.501328 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:17:49.501332 | orchestrator |
2026-02-28 01:17:49.501336 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 01:17:49.501340 | orchestrator | testbed-node-0 : ok=58  changed=39  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-28 01:17:49.501346 | orchestrator | testbed-node-1 : ok=34  changed=23  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-28 01:17:49.501350 | orchestrator | testbed-node-2 : ok=34  changed=23  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-28 01:17:49.501354 | orchestrator |
2026-02-28 01:17:49.501369 | orchestrator |
2026-02-28 01:17:49.501373 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 01:17:49.501377 | orchestrator | Saturday 28 February 2026 01:17:47 +0000 (0:00:06.490) 0:05:17.561 *****
2026-02-28 01:17:49.501381 | orchestrator | ===============================================================================
2026-02-28 01:17:49.501384 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 25.37s
2026-02-28 01:17:49.501388 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 18.51s
2026-02-28 01:17:49.501392 | orchestrator | octavia : Adding octavia related roles --------------------------------- 18.29s
2026-02-28
01:17:49.501399 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.50s 2026-02-28 01:17:49.501403 | orchestrator | octavia : Restart octavia-api container -------------------------------- 12.86s 2026-02-28 01:17:49.501407 | orchestrator | octavia : Create security groups for octavia --------------------------- 12.41s 2026-02-28 01:17:49.501411 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.74s 2026-02-28 01:17:49.501415 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.64s 2026-02-28 01:17:49.501418 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 9.22s 2026-02-28 01:17:49.501422 | orchestrator | service-ks-register : octavia | Granting/revoking user roles ------------ 9.02s 2026-02-28 01:17:49.501426 | orchestrator | service-ks-register : octavia | Creating/deleting endpoints ------------- 7.60s 2026-02-28 01:17:49.501430 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.56s 2026-02-28 01:17:49.501433 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 6.49s 2026-02-28 01:17:49.501437 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 6.35s 2026-02-28 01:17:49.501441 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 6.32s 2026-02-28 01:17:49.501445 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.92s 2026-02-28 01:17:49.501449 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.82s 2026-02-28 01:17:49.501452 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.82s 2026-02-28 01:17:49.501456 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.57s 2026-02-28 01:17:49.501460 
| orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.56s 2026-02-28 01:17:52.553716 | orchestrator | 2026-02-28 01:17:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:17:55.597419 | orchestrator | 2026-02-28 01:17:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:17:58.640662 | orchestrator | 2026-02-28 01:17:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:18:01.680830 | orchestrator | 2026-02-28 01:18:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:18:04.724419 | orchestrator | 2026-02-28 01:18:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:18:07.767723 | orchestrator | 2026-02-28 01:18:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:18:10.810335 | orchestrator | 2026-02-28 01:18:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:18:13.863065 | orchestrator | 2026-02-28 01:18:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:18:16.903186 | orchestrator | 2026-02-28 01:18:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:18:19.943675 | orchestrator | 2026-02-28 01:18:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:18:22.987943 | orchestrator | 2026-02-28 01:18:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:18:26.038399 | orchestrator | 2026-02-28 01:18:26 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:18:29.073593 | orchestrator | 2026-02-28 01:18:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:18:32.116568 | orchestrator | 2026-02-28 01:18:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:18:35.156919 | orchestrator | 2026-02-28 01:18:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:18:38.201825 | orchestrator | 2026-02-28 01:18:38 | INFO  
| Wait 1 second(s) until refresh of running tasks 2026-02-28 01:18:41.238644 | orchestrator | 2026-02-28 01:18:41 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:18:44.271463 | orchestrator | 2026-02-28 01:18:44 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:18:47.314696 | orchestrator | 2026-02-28 01:18:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:18:50.361031 | orchestrator | 2026-02-28 01:18:50.735954 | orchestrator | 2026-02-28 01:18:50.743527 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Feb 28 01:18:50 UTC 2026 2026-02-28 01:18:50.743592 | orchestrator | 2026-02-28 01:18:51.141456 | orchestrator | ok: Runtime: 0:37:14.728733 2026-02-28 01:18:51.389932 | 2026-02-28 01:18:51.390091 | TASK [Bootstrap services] 2026-02-28 01:18:52.151386 | orchestrator | 2026-02-28 01:18:52.151547 | orchestrator | # BOOTSTRAP 2026-02-28 01:18:52.151564 | orchestrator | 2026-02-28 01:18:52.151573 | orchestrator | + set -e 2026-02-28 01:18:52.151582 | orchestrator | + echo 2026-02-28 01:18:52.151592 | orchestrator | + echo '# BOOTSTRAP' 2026-02-28 01:18:52.151604 | orchestrator | + echo 2026-02-28 01:18:52.151636 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-02-28 01:18:52.162058 | orchestrator | + set -e 2026-02-28 01:18:52.162162 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-02-28 01:18:58.356357 | orchestrator | 2026-02-28 01:18:58 | INFO  | It takes a moment until task 3172168f-e785-4440-b90f-e2c9cea67107 (flavor-manager) has been started and output is visible here. 
2026-02-28 01:19:07.208428 | orchestrator | 2026-02-28 01:19:02 | INFO  | Flavor SCS-1L-1 created 2026-02-28 01:19:07.208560 | orchestrator | 2026-02-28 01:19:02 | INFO  | Flavor SCS-1L-1-5 created 2026-02-28 01:19:07.208579 | orchestrator | 2026-02-28 01:19:02 | INFO  | Flavor SCS-1V-2 created 2026-02-28 01:19:07.208592 | orchestrator | 2026-02-28 01:19:02 | INFO  | Flavor SCS-1V-2-5 created 2026-02-28 01:19:07.208604 | orchestrator | 2026-02-28 01:19:03 | INFO  | Flavor SCS-1V-4 created 2026-02-28 01:19:07.208615 | orchestrator | 2026-02-28 01:19:03 | INFO  | Flavor SCS-1V-4-10 created 2026-02-28 01:19:07.208627 | orchestrator | 2026-02-28 01:19:03 | INFO  | Flavor SCS-1V-8 created 2026-02-28 01:19:07.208639 | orchestrator | 2026-02-28 01:19:03 | INFO  | Flavor SCS-1V-8-20 created 2026-02-28 01:19:07.208665 | orchestrator | 2026-02-28 01:19:03 | INFO  | Flavor SCS-2V-4 created 2026-02-28 01:19:07.208677 | orchestrator | 2026-02-28 01:19:03 | INFO  | Flavor SCS-2V-4-10 created 2026-02-28 01:19:07.208689 | orchestrator | 2026-02-28 01:19:04 | INFO  | Flavor SCS-2V-8 created 2026-02-28 01:19:07.208700 | orchestrator | 2026-02-28 01:19:04 | INFO  | Flavor SCS-2V-8-20 created 2026-02-28 01:19:07.208711 | orchestrator | 2026-02-28 01:19:04 | INFO  | Flavor SCS-2V-16 created 2026-02-28 01:19:07.208722 | orchestrator | 2026-02-28 01:19:04 | INFO  | Flavor SCS-2V-16-50 created 2026-02-28 01:19:07.208733 | orchestrator | 2026-02-28 01:19:04 | INFO  | Flavor SCS-4V-8 created 2026-02-28 01:19:07.208744 | orchestrator | 2026-02-28 01:19:04 | INFO  | Flavor SCS-4V-8-20 created 2026-02-28 01:19:07.208755 | orchestrator | 2026-02-28 01:19:04 | INFO  | Flavor SCS-4V-16 created 2026-02-28 01:19:07.208766 | orchestrator | 2026-02-28 01:19:05 | INFO  | Flavor SCS-4V-16-50 created 2026-02-28 01:19:07.208778 | orchestrator | 2026-02-28 01:19:05 | INFO  | Flavor SCS-4V-32 created 2026-02-28 01:19:07.208789 | orchestrator | 2026-02-28 01:19:05 | INFO  | Flavor SCS-4V-32-100 created 
2026-02-28 01:19:07.208800 | orchestrator | 2026-02-28 01:19:05 | INFO  | Flavor SCS-8V-16 created 2026-02-28 01:19:07.208811 | orchestrator | 2026-02-28 01:19:05 | INFO  | Flavor SCS-8V-16-50 created 2026-02-28 01:19:07.208842 | orchestrator | 2026-02-28 01:19:05 | INFO  | Flavor SCS-8V-32 created 2026-02-28 01:19:07.208864 | orchestrator | 2026-02-28 01:19:05 | INFO  | Flavor SCS-8V-32-100 created 2026-02-28 01:19:07.208876 | orchestrator | 2026-02-28 01:19:06 | INFO  | Flavor SCS-16V-32 created 2026-02-28 01:19:07.208887 | orchestrator | 2026-02-28 01:19:06 | INFO  | Flavor SCS-16V-32-100 created 2026-02-28 01:19:07.208898 | orchestrator | 2026-02-28 01:19:06 | INFO  | Flavor SCS-2V-4-20s created 2026-02-28 01:19:07.208909 | orchestrator | 2026-02-28 01:19:06 | INFO  | Flavor SCS-4V-8-50s created 2026-02-28 01:19:07.208920 | orchestrator | 2026-02-28 01:19:06 | INFO  | Flavor SCS-4V-16-100s created 2026-02-28 01:19:07.208931 | orchestrator | 2026-02-28 01:19:06 | INFO  | Flavor SCS-8V-32-100s created 2026-02-28 01:19:09.838572 | orchestrator | 2026-02-28 01:19:09 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-02-28 01:19:17.956060 | orchestrator | 2026-02-28 01:19:17 | INFO  | Prepare task for execution of bootstrap-basic. 2026-02-28 01:19:18.040374 | orchestrator | 2026-02-28 01:19:18 | INFO  | Task 1db55e4d-fd69-4154-99a4-4c4d04ecf1c9 (bootstrap-basic) was prepared for execution. 2026-02-28 01:19:18.040456 | orchestrator | 2026-02-28 01:19:18 | INFO  | It takes a moment until task 1db55e4d-fd69-4154-99a4-4c4d04ecf1c9 (bootstrap-basic) has been started and output is visible here. 
2026-02-28 01:20:07.433322 | orchestrator | 2026-02-28 01:20:07.433463 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-02-28 01:20:07.433494 | orchestrator | 2026-02-28 01:20:07.433515 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 01:20:07.433536 | orchestrator | Saturday 28 February 2026 01:19:22 +0000 (0:00:00.070) 0:00:00.070 ***** 2026-02-28 01:20:07.433555 | orchestrator | ok: [localhost] 2026-02-28 01:20:07.433578 | orchestrator | 2026-02-28 01:20:07.433597 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-02-28 01:20:07.433617 | orchestrator | Saturday 28 February 2026 01:19:24 +0000 (0:00:02.126) 0:00:02.196 ***** 2026-02-28 01:20:07.433640 | orchestrator | ok: [localhost] 2026-02-28 01:20:07.433660 | orchestrator | 2026-02-28 01:20:07.433680 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-02-28 01:20:07.433700 | orchestrator | Saturday 28 February 2026 01:19:33 +0000 (0:00:08.864) 0:00:11.061 ***** 2026-02-28 01:20:07.433722 | orchestrator | changed: [localhost] 2026-02-28 01:20:07.433742 | orchestrator | 2026-02-28 01:20:07.433763 | orchestrator | TASK [Create public network] *************************************************** 2026-02-28 01:20:07.433782 | orchestrator | Saturday 28 February 2026 01:19:41 +0000 (0:00:07.999) 0:00:19.060 ***** 2026-02-28 01:20:07.433801 | orchestrator | changed: [localhost] 2026-02-28 01:20:07.433821 | orchestrator | 2026-02-28 01:20:07.433847 | orchestrator | TASK [Set public network to default] ******************************************* 2026-02-28 01:20:07.433869 | orchestrator | Saturday 28 February 2026 01:19:47 +0000 (0:00:05.679) 0:00:24.740 ***** 2026-02-28 01:20:07.433924 | orchestrator | changed: [localhost] 2026-02-28 01:20:07.433943 | orchestrator | 2026-02-28 01:20:07.433960 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-02-28 01:20:07.433980 | orchestrator | Saturday 28 February 2026 01:19:54 +0000 (0:00:06.899) 0:00:31.640 ***** 2026-02-28 01:20:07.433999 | orchestrator | changed: [localhost] 2026-02-28 01:20:07.434096 | orchestrator | 2026-02-28 01:20:07.434120 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-02-28 01:20:07.434139 | orchestrator | Saturday 28 February 2026 01:19:58 +0000 (0:00:04.767) 0:00:36.408 ***** 2026-02-28 01:20:07.434158 | orchestrator | changed: [localhost] 2026-02-28 01:20:07.434178 | orchestrator | 2026-02-28 01:20:07.434198 | orchestrator | TASK [Create manager role] ***************************************************** 2026-02-28 01:20:07.434233 | orchestrator | Saturday 28 February 2026 01:20:03 +0000 (0:00:04.205) 0:00:40.613 ***** 2026-02-28 01:20:07.434246 | orchestrator | ok: [localhost] 2026-02-28 01:20:07.434257 | orchestrator | 2026-02-28 01:20:07.434453 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:20:07.434473 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:20:07.434493 | orchestrator | 2026-02-28 01:20:07.434512 | orchestrator | 2026-02-28 01:20:07.434532 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:20:07.434551 | orchestrator | Saturday 28 February 2026 01:20:07 +0000 (0:00:03.980) 0:00:44.594 ***** 2026-02-28 01:20:07.434570 | orchestrator | =============================================================================== 2026-02-28 01:20:07.434588 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.86s 2026-02-28 01:20:07.434642 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.00s 2026-02-28 01:20:07.434658 | 
orchestrator | Set public network to default ------------------------------------------- 6.90s 2026-02-28 01:20:07.434677 | orchestrator | Create public network --------------------------------------------------- 5.68s 2026-02-28 01:20:07.434696 | orchestrator | Create public subnet ---------------------------------------------------- 4.77s 2026-02-28 01:20:07.434714 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.21s 2026-02-28 01:20:07.434731 | orchestrator | Create manager role ----------------------------------------------------- 3.98s 2026-02-28 01:20:07.434747 | orchestrator | Gathering Facts --------------------------------------------------------- 2.13s 2026-02-28 01:20:10.086613 | orchestrator | 2026-02-28 01:20:10 | INFO  | It takes a moment until task 05b81423-cbf0-4d08-9a50-769dc84bd296 (image-manager) has been started and output is visible here. 2026-02-28 01:20:49.910123 | orchestrator | 2026-02-28 01:20:13 | INFO  | Processing image 'Cirros 0.6.2' 2026-02-28 01:20:49.910314 | orchestrator | 2026-02-28 01:20:13 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-02-28 01:20:49.910346 | orchestrator | 2026-02-28 01:20:13 | INFO  | Importing image Cirros 0.6.2 2026-02-28 01:20:49.910364 | orchestrator | 2026-02-28 01:20:13 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-28 01:20:49.910383 | orchestrator | 2026-02-28 01:20:15 | INFO  | Waiting for image to leave queued state... 2026-02-28 01:20:49.910401 | orchestrator | 2026-02-28 01:20:17 | INFO  | Waiting for import to complete... 
2026-02-28 01:20:49.910419 | orchestrator | 2026-02-28 01:20:27 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-02-28 01:20:49.910436 | orchestrator | 2026-02-28 01:20:27 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-02-28 01:20:49.910454 | orchestrator | 2026-02-28 01:20:27 | INFO  | Setting internal_version = 0.6.2 2026-02-28 01:20:49.910471 | orchestrator | 2026-02-28 01:20:27 | INFO  | Setting image_original_user = cirros 2026-02-28 01:20:49.910489 | orchestrator | 2026-02-28 01:20:27 | INFO  | Adding tag os:cirros 2026-02-28 01:20:49.910507 | orchestrator | 2026-02-28 01:20:27 | INFO  | Setting property architecture: x86_64 2026-02-28 01:20:49.910524 | orchestrator | 2026-02-28 01:20:28 | INFO  | Setting property hw_disk_bus: scsi 2026-02-28 01:20:49.910541 | orchestrator | 2026-02-28 01:20:28 | INFO  | Setting property hw_rng_model: virtio 2026-02-28 01:20:49.910558 | orchestrator | 2026-02-28 01:20:28 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-28 01:20:49.910576 | orchestrator | 2026-02-28 01:20:28 | INFO  | Setting property hw_watchdog_action: reset 2026-02-28 01:20:49.910594 | orchestrator | 2026-02-28 01:20:29 | INFO  | Setting property hypervisor_type: qemu 2026-02-28 01:20:49.910629 | orchestrator | 2026-02-28 01:20:29 | INFO  | Setting property os_distro: cirros 2026-02-28 01:20:49.910647 | orchestrator | 2026-02-28 01:20:29 | INFO  | Setting property os_purpose: minimal 2026-02-28 01:20:49.910664 | orchestrator | 2026-02-28 01:20:29 | INFO  | Setting property replace_frequency: never 2026-02-28 01:20:49.910681 | orchestrator | 2026-02-28 01:20:29 | INFO  | Setting property uuid_validity: none 2026-02-28 01:20:49.910699 | orchestrator | 2026-02-28 01:20:30 | INFO  | Setting property provided_until: none 2026-02-28 01:20:49.910717 | orchestrator | 2026-02-28 01:20:30 | INFO  | Setting property image_description: Cirros 2026-02-28 01:20:49.910735 | orchestrator | 2026-02-28 01:20:30 | INFO  | 
Setting property image_name: Cirros 2026-02-28 01:20:49.910780 | orchestrator | 2026-02-28 01:20:30 | INFO  | Setting property internal_version: 0.6.2 2026-02-28 01:20:49.910798 | orchestrator | 2026-02-28 01:20:30 | INFO  | Setting property image_original_user: cirros 2026-02-28 01:20:49.910815 | orchestrator | 2026-02-28 01:20:31 | INFO  | Setting property os_version: 0.6.2 2026-02-28 01:20:49.910834 | orchestrator | 2026-02-28 01:20:31 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-28 01:20:49.910852 | orchestrator | 2026-02-28 01:20:31 | INFO  | Setting property image_build_date: 2023-05-30 2026-02-28 01:20:49.910869 | orchestrator | 2026-02-28 01:20:31 | INFO  | Checking status of 'Cirros 0.6.2' 2026-02-28 01:20:49.910886 | orchestrator | 2026-02-28 01:20:31 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-02-28 01:20:49.910909 | orchestrator | 2026-02-28 01:20:31 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-02-28 01:20:49.910928 | orchestrator | 2026-02-28 01:20:31 | INFO  | Processing image 'Cirros 0.6.3' 2026-02-28 01:20:49.910946 | orchestrator | 2026-02-28 01:20:32 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-02-28 01:20:49.910963 | orchestrator | 2026-02-28 01:20:32 | INFO  | Importing image Cirros 0.6.3 2026-02-28 01:20:49.910979 | orchestrator | 2026-02-28 01:20:32 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-02-28 01:20:49.910995 | orchestrator | 2026-02-28 01:20:32 | INFO  | Waiting for image to leave queued state... 2026-02-28 01:20:49.911013 | orchestrator | 2026-02-28 01:20:34 | INFO  | Waiting for import to complete... 
2026-02-28 01:20:49.911053 | orchestrator | 2026-02-28 01:20:44 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-02-28 01:20:49.911072 | orchestrator | 2026-02-28 01:20:45 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-02-28 01:20:49.911088 | orchestrator | 2026-02-28 01:20:45 | INFO  | Setting internal_version = 0.6.3 2026-02-28 01:20:49.911103 | orchestrator | 2026-02-28 01:20:45 | INFO  | Setting image_original_user = cirros 2026-02-28 01:20:49.911120 | orchestrator | 2026-02-28 01:20:45 | INFO  | Adding tag os:cirros 2026-02-28 01:20:49.911137 | orchestrator | 2026-02-28 01:20:45 | INFO  | Setting property architecture: x86_64 2026-02-28 01:20:49.911154 | orchestrator | 2026-02-28 01:20:45 | INFO  | Setting property hw_disk_bus: scsi 2026-02-28 01:20:49.911170 | orchestrator | 2026-02-28 01:20:45 | INFO  | Setting property hw_rng_model: virtio 2026-02-28 01:20:49.911186 | orchestrator | 2026-02-28 01:20:46 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-28 01:20:49.911203 | orchestrator | 2026-02-28 01:20:46 | INFO  | Setting property hw_watchdog_action: reset 2026-02-28 01:20:49.911219 | orchestrator | 2026-02-28 01:20:46 | INFO  | Setting property hypervisor_type: qemu 2026-02-28 01:20:49.911260 | orchestrator | 2026-02-28 01:20:46 | INFO  | Setting property os_distro: cirros 2026-02-28 01:20:49.911276 | orchestrator | 2026-02-28 01:20:46 | INFO  | Setting property os_purpose: minimal 2026-02-28 01:20:49.911292 | orchestrator | 2026-02-28 01:20:47 | INFO  | Setting property replace_frequency: never 2026-02-28 01:20:49.911310 | orchestrator | 2026-02-28 01:20:47 | INFO  | Setting property uuid_validity: none 2026-02-28 01:20:49.911327 | orchestrator | 2026-02-28 01:20:47 | INFO  | Setting property provided_until: none 2026-02-28 01:20:49.911344 | orchestrator | 2026-02-28 01:20:47 | INFO  | Setting property image_description: Cirros 2026-02-28 01:20:49.911376 | orchestrator | 2026-02-28 01:20:47 | INFO  | 
Setting property image_name: Cirros 2026-02-28 01:20:49.911395 | orchestrator | 2026-02-28 01:20:48 | INFO  | Setting property internal_version: 0.6.3 2026-02-28 01:20:49.911412 | orchestrator | 2026-02-28 01:20:48 | INFO  | Setting property image_original_user: cirros 2026-02-28 01:20:49.911429 | orchestrator | 2026-02-28 01:20:48 | INFO  | Setting property os_version: 0.6.3 2026-02-28 01:20:49.911445 | orchestrator | 2026-02-28 01:20:48 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-02-28 01:20:49.911462 | orchestrator | 2026-02-28 01:20:49 | INFO  | Setting property image_build_date: 2024-09-26 2026-02-28 01:20:49.911478 | orchestrator | 2026-02-28 01:20:49 | INFO  | Checking status of 'Cirros 0.6.3' 2026-02-28 01:20:49.911495 | orchestrator | 2026-02-28 01:20:49 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-02-28 01:20:49.911512 | orchestrator | 2026-02-28 01:20:49 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-02-28 01:20:50.290802 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-02-28 01:20:52.728351 | orchestrator | 2026-02-28 01:20:52 | INFO  | date: 2026-02-27 2026-02-28 01:20:52.728448 | orchestrator | 2026-02-28 01:20:52 | INFO  | image: octavia-amphora-haproxy-2025.1.20260227.qcow2 2026-02-28 01:20:52.728485 | orchestrator | 2026-02-28 01:20:52 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260227.qcow2 2026-02-28 01:20:52.728512 | orchestrator | 2026-02-28 01:20:52 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260227.qcow2.CHECKSUM 2026-02-28 01:20:52.880444 | orchestrator | 2026-02-28 01:20:52 | INFO  | checksum: localhost | ok: "/var/lib/zuul/builds/4580c583255a4bbaa1e0ce291d0fa749/work/logs" 2026-02-28 01:21:24.411832 | 
orchestrator -> localhost | changed: "/var/lib/zuul/builds/4580c583255a4bbaa1e0ce291d0fa749/work/artifacts" 2026-02-28 01:21:24.676560 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/4580c583255a4bbaa1e0ce291d0fa749/work/docs" 2026-02-28 01:21:24.701090 | 2026-02-28 01:21:24.701255 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-02-28 01:21:25.607099 | orchestrator | changed: .d..t...... ./ 2026-02-28 01:21:25.607403 | orchestrator | changed: All items complete 2026-02-28 01:21:25.607451 | 2026-02-28 01:21:26.339698 | orchestrator | changed: .d..t...... ./ 2026-02-28 01:21:27.108442 | orchestrator | changed: .d..t...... ./ 2026-02-28 01:21:27.136110 | 2026-02-28 01:21:27.136272 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-02-28 01:21:27.171513 | orchestrator | skipping: Conditional result was False 2026-02-28 01:21:27.175107 | orchestrator | skipping: Conditional result was False 2026-02-28 01:21:27.197744 | 2026-02-28 01:21:27.197857 | PLAY RECAP 2026-02-28 01:21:27.197965 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-02-28 01:21:27.198005 | 2026-02-28 01:21:27.331488 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-02-28 01:21:27.332571 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-02-28 01:21:28.067095 | 2026-02-28 01:21:28.067254 | PLAY [Base post] 2026-02-28 01:21:28.081305 | 2026-02-28 01:21:28.081428 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-02-28 01:21:29.071826 | orchestrator | changed 2026-02-28 01:21:29.080262 | 2026-02-28 01:21:29.080375 | PLAY RECAP 2026-02-28 01:21:29.080435 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-02-28 01:21:29.080496 | 2026-02-28 01:21:29.208627 | POST-RUN END RESULT_NORMAL: [trusted : 
github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-02-28 01:21:29.211153 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-02-28 01:21:29.991632 | 2026-02-28 01:21:29.991801 | PLAY [Base post-logs] 2026-02-28 01:21:30.002648 | 2026-02-28 01:21:30.002801 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-02-28 01:21:30.461870 | localhost | changed 2026-02-28 01:21:30.479079 | 2026-02-28 01:21:30.479258 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-02-28 01:21:30.518745 | localhost | ok 2026-02-28 01:21:30.526602 | 2026-02-28 01:21:30.526764 | TASK [Set zuul-log-path fact] 2026-02-28 01:21:30.554094 | localhost | ok 2026-02-28 01:21:30.565594 | 2026-02-28 01:21:30.565726 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-02-28 01:21:30.604195 | localhost | ok 2026-02-28 01:21:30.611056 | 2026-02-28 01:21:30.611265 | TASK [upload-logs : Create log directories] 2026-02-28 01:21:31.132557 | localhost | changed 2026-02-28 01:21:31.137674 | 2026-02-28 01:21:31.137838 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-02-28 01:21:31.639468 | localhost -> localhost | ok: Runtime: 0:00:00.007226 2026-02-28 01:21:31.643726 | 2026-02-28 01:21:31.643843 | TASK [upload-logs : Upload logs to log server] 2026-02-28 01:21:32.212714 | localhost | Output suppressed because no_log was given 2026-02-28 01:21:32.217237 | 2026-02-28 01:21:32.217417 | LOOP [upload-logs : Compress console log and json output] 2026-02-28 01:21:32.268913 | localhost | skipping: Conditional result was False 2026-02-28 01:21:32.287348 | localhost | skipping: Conditional result was False 2026-02-28 01:21:32.297789 | 2026-02-28 01:21:32.297977 | LOOP [upload-logs : Upload compressed console log and json output] 2026-02-28 01:21:32.342529 | localhost | skipping: Conditional result was False 2026-02-28 01:21:32.342782 | 2026-02-28 01:21:32.356497 | localhost | skipping: Conditional 
result was False 2026-02-28 01:21:32.369716 | 2026-02-28 01:21:32.369965 | LOOP [upload-logs : Upload console log and json output]